 

Using the Information and Communications Technology Data Deluge from a Semantic Perspective of a Dynamic Challenge: What to Learn and What to Ignore?

Victor Greu

Romanian Distribution Committee Magazine, 2019, vol. 10, issue 3, 16-29

Abstract: The paper examines the Data Deluge generated worldwide by the flows of data created by the proliferation and exponential development of Information and Communications Technologies (ICT), the main driving factor in the progress of the Information Society (IS) toward the Knowledge-Based Society (KBS). The analysis takes a systemic approach in order to identify the main premises and features of these complex processes, aiming at the optimal efficiency of data generation and use for humankind and the Earth's survival. The main emergent (hype) technologies behind ICT's exponential development in 2019 are considered as contributors to the Data Deluge, including Artificial Intelligence (AI), the Internet of Things (IoT), Cloud, Big Data, 3D Printing, Robotic Process Automation, Hardware Robotics, Blockchain, and Augmented/Virtual Reality. ICT as a whole, but mainly the emergent technologies listed above, contribute through complex processes involving people, machines and devices in a planetary digital disruption to the huge phenomenon called the Data Deluge, behind which the Internet (of the third industrial revolution) still serves, mostly but not exclusively, as a backbone. As an entry point to the Data Deluge issue, the paper chooses CERN, an amazing "temple" of science and technology; social media have also produced crucial changes in the way we live, including in business models, while entertainment and other applications running over broadband mobile communications are likewise impressive generators of the Data Deluge. All these sources are illustrated with global figures that grow more impressive every year (over half a yottabyte generated), as the overwhelming park of connected devices and people increases exponentially, approximately following the consequences of Moore's Law as an invitation to the Data Deluge.
Although the cost of computing and communication is falling toward zero, this does not by itself guarantee a high benefit in information/knowledge; on the contrary, greater expertise (means/methods) is needed to extract information, and eventually knowledge, from the Data Deluge. As a prominent and relevant source of the Data Deluge, the CERN project is presented in detail (it includes 22 member states and a global community of 15,000 researchers). CERN's mission centres on international research, technology, education and collaboration. CERN advances the frontiers of knowledge about the fundamental structure of the Universe: the generation of the Universe by the Big Bang, the kinds of matter present in the Universe's first moments, and the search for Dark Matter and Antimatter. In addition, CERN develops new technologies for particle accelerators and detectors, but also for emergent fields such as advanced ICT (including Quantum Computing), the Web (the World Wide Web was invented at CERN in 1989 by the British scientist Tim Berners-Lee) and the computing GRID. Medical diagnosis and therapy will also benefit considerably from the unprecedented advances achieved at CERN. The CERN LHC is a machine of records, including: the hottest spots in the galaxy; temperatures colder than outer space; and the most sophisticated detectors ever built, like gigantic digital cameras housed in cathedral-sized caverns. Concluding that CERN is just the tip of the iceberg that the Data Deluge is, or could become, other relevant examples could be given, but none reaches CERN's unique records (although, in the same class of Data Deluge "giants", there are Facebook, Google, Amazon, Netflix etc.).
One goal of the paper is to analyse what, under these storm waves of the Data Deluge, will melt these icebergs so as to extract and use the best of the information/knowledge that humankind and the Earth need today and, especially, tomorrow. In the second section, a deliberately disproportionate comparison of the Pyramids of antiquity with CERN is used to emphasise the huge role of technological advances (mainly enabled by ICT) in generating the Data Deluge and then extracting information, and eventually knowledge, even from sources (like the Pyramids) that had almost "run dry" before this new technological support. The phase of human evolution we are in matters for the relevance of any analysis, since in every phase technological advances push the generation of data, information and eventually knowledge to a higher level (scarcely credible before), which also explains the "miracle" in the case of the Pyramids. The analysis also considers the deep and complex processes in which data, information and eventually knowledge are linked with the multitude of goals people may have when they expect the desired data and look, from a semantic perspective, to use them to fulfil those wishes. The difficulty and complexity of such analyses and optimisation approaches are greatly increased by the fact that the premises of all these processes are changing fast and nonlinearly, mainly because of the exponential pace of ICT/IS/KBS, thereby generating everywhere a dynamic challenge for the mentioned semantic perspective, which in simple terms could be expressed as: what to learn and what to ignore? The paper also approaches the difference between data and knowledge, observing that leveraging knowledge refinement requires timely thought and the creation of appropriate ICT tools, because creating large amounts of data does not automatically generate large amounts of knowledge.
Approaching both the tools and the thinking involved in knowledge creation in this epoch of the Data Deluge, the paper points out the diversity, complexity and difficulty of the semantic contexts in which the optimal amount of data must be selected to lead to the desired information and, eventually, to knowledge beneficial to wisdom. As a prominent example of such tools, the heart of CERN's computing infrastructure is given: the Worldwide LHC Computing Grid (WLCG) comprises 170 computing centres in 42 countries, about 1 million CPU cores, 1 EB of storage, 340 Gb/s of transatlantic bandwidth, and 3 PB of data moved per day. The analysis of the complex processes by which data stream toward information and eventually knowledge points to two main factors influencing this (long) road: environmental factors, located among the Data Deluge sources (where the Data Deluge comes from) discussed mainly in the first section, and cultural factors, referring to the intimate, diverse, complex and dynamic processes in which data are analysed, interpreted or selected by humans or machines, usually (but not exclusively) by semantic methods that naturally benefit from prominent ICT advances such as AI/ML/CAIS. The final conclusion is that the analysis needs to be continued, in order to gain deeper (usually, but not always, scientific) insights into the Data Deluge, which nowadays arrives faster and from everywhere.

Keywords: Data Deluge; CERN Large Hadron Collider; semantic methods; Worldwide LHC Computing Grid (WLCG); World Wide Web; Particle Physics; Digital Disruption; Internet of Things; information society; knowledge based society; broadband mobile communications
JEL-codes: L63 L86 M15 O31 O33
Date: 2019

Downloads: (external link)
http://crd-aida.ro/RePEc/rdc/v10i3/2.pdf (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:rdc:journl:v:10:y:2019:i:3:p:16-29


More articles in Romanian Distribution Committee Magazine from Romanian Distribution Committee
Bibliographic data for this series is maintained by Theodor Valentin Purcarea.

 
Page updated 2019-10-23
Handle: RePEc:rdc:journl:v:10:y:2019:i:3:p:16-29