Using the information and communications technology data deluge from a semantic perspective of a dynamic challenge: What to learn and what to ignore? -Part 2-
Romanian Distribution Committee Magazine, 2019, vol. 10, issue 4, 17-29
The paper analyses the influences of the Data Deluge (DD), the huge flows of data leveraged or created in all fields of activity by the complex proliferation and exponential development of Information and Communications Technology (ICT), the main driving factor in the progress of the Information Society (IS) toward the Knowledge-Based Society (KBS). The paper then approaches the dynamic processes by which ICT generates data, through its complex symbiosis with humans. In a systemic approach, the paper briefly analyses the revolutionary impact of the DD on science, as well as the way most of the DD is generated (including the challenge of the technological progress needed to accomplish this task). The result is that the DD has produced a new reality, called eScience, considered the fourth paradigm after experiment, theory and simulation, in which scientists no longer interact directly with the phenomena. This not only opens the exploration of previously inaccessible fields but also provides the basis of data-intensive science, one of the mechanisms by which the world's spiral of development is driven using the multiplying force of ICT/DD. The concrete features of this new era (visible at CERN, in climate change forecasting or in critical national infrastructures) represent fundamental trends in solving the most critical challenges of the DD and include Big Data analytics, real-time processing/storage on site and the integration of ICT advances such as the Internet of Things (IoT), artificial intelligence (AI), Cloud, Edge, Fog etc. Among these trends, the new phase of AI is currently the most prominent ICT hype, notable for machine learning (ML), deep learning (DL) and the emerging field of cognitive computing; yet their overall results remain highly dependent on human intelligence (HI) for delivering optimal knowledge refining.
The long road from data to knowledge or decisions has many steps and depends not only on the (amount of) data but also on the specific field/algorithm and on humans, as highly performant applications usually need training data with specific content (not merely more simple/unstructured data), which often requires human supervision, even though AI/ML not only tends to get closer to HI but, for specific tasks, to replace humans. Another side of the DD challenges concerns the most complex and complicated issues of extracting, and especially evaluating, the knowledge gained in the diverse processes involving the design and use of DD/AI/ML at Earth scale. This is about how to manage the DD's evolution in each main field of application in order to keep those applications efficacious and efficient. A preliminary conclusion is that such an approach could lead to partial, reasonable solutions, but it eludes what is, from our point of view, the most important and difficult issue: the global effects (influences) of the exponential ICT/DD evolution at the scale of the Earth's ecosystem. Even if, mathematically, the subsystems can be optimized according to some criteria, at system level not only may a suboptimal solution appear, but considerable vulnerabilities and risks can be expected. Besides its huge benefits, the exponential development of ICT (not excluding other connected technologies) could generate less desirable consequences at Earth scale, such as the carbon footprint, human dependence and waste pollution, in the general world context of climate change, resource depletion, the need for a clean environment, social imbalances and so on.
The most relevant case is that the DD leverages many applications for climate change forecasting, a beneficial influence, but a vicious circle can arise here, because these applications represent only a small part of the DD sphere, and many others may be less beneficial or less knowledge-providing, such as streaming too much video content for entertainment or games, while all applications in the sphere contribute to the carbon footprint. The analysis should therefore be extended to all applications, i.e. it is necessary to evaluate them all, assessing, case by case, at two levels: first, at the local level, their specific benefits and challenges; and second, at the global level, the complex connections through which they may add to or influence others in superposed (less beneficial) processes or consequences. Consequently, the most complicated and difficult problem in managing the exponential evolution of DD/ICT is how to obtain, using both DD/ML/AI/ICT and HI resources, the refined knowledge that can provide optimal solutions at every relevant phase of that evolution, i.e. how to save the Earth and humankind from irreversible consequences using the most powerful tools of science and technology (HI/AI). Moreover, the difficulty of this global problem increases with the level of complexity and with the speed of the changes that DD/ICT bring to the IS/KBS. In this global problem, the engineers and other specialists involved in developing ICT systems, products and services have a prominent role (unfortunately not always a decisive one). In fact, the fundamental problem of optimally refining knowledge often starts with the technical designers, although the opportunity assessment of big projects should involve specialists at many levels and from different areas, in order to achieve a multicriterial optimization at global scale.
As opportunity also means deciding what knowledge must be refined, this problem is similar to the conscientious (university) professor's dilemma, which we all (desirably) experience every year at the beginning of the school year, when we decide what to remove from and what to introduce into our courses, and generally when a course, book or body of knowledge becomes obsolete. The final conclusion is that in our ever-changing DD days it is almost impossible to know precisely when and how knowledge must be refined (a relative and approximate decision), but this does not mean that humankind should ignore these challenges; on the contrary, we have to keep striving to get as close as possible to the optimum, using the available up-to-date data/information and resources and applying AI/HI with the desirable wisdom.
Keywords: Data Deluge; Big Data; machine learning; artificial intelligence; learning algorithms; human intelligence; Internet of Things; eScience; data-intensive science; computer algorithms; computing infrastructure; climate changes
JEL-codes: L63 L86 M15 O31 O33
Persistent link: https://EconPapers.repec.org/RePEc:rdc:journl:v:10:y:2019:i:4:p:17-29
Bibliographic data for series maintained by Theodor Valentin Purcarea.