
Tuesday, April 23, 2024

 ARTIFICIAL INTELLIGENCE:

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software which enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.


AI technology is widely used throughout industry, government, and science. Some high-profile applications include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); interacting via human speech (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."



Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence. Artificial intelligence was founded as an academic discipline in 1956. The field went through multiple cycles of optimism, followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012, when deep learning surpassed all previous AI techniques, and after 2017 with the transformer architecture. This led to the AI boom of the early 2020s, with companies, universities, and laboratories overwhelmingly based in the United States pioneering significant advances in artificial intelligence.


The growing use of artificial intelligence in the 21st century is influencing a societal and economic shift towards increased automation, data-driven decision-making, and the integration of AI systems into various economic sectors and areas of life, impacting job markets, healthcare, government, industry, and education. This raises questions about the long-term effects, ethical implications, and risks of AI, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.


The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field's long-term goals.

To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.


Monday, April 22, 2024

 VIRTUAL REALITY:

Virtual reality (VR) is a simulated experience that employs 3D near-eye displays and pose tracking to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment (particularly video games), education (such as medical, safety or military training) and business (such as virtual meetings). VR is one of the key technologies in the reality-virtuality continuum. As such, it is different from other digital visualization solutions, such as augmented virtuality and augmented reality.


Currently, standard virtual reality systems use either virtual reality headsets or multi-projected environments to generate realistic images, sounds, and other sensations that simulate a user's physical presence in a virtual environment. A person using virtual reality equipment is able to look around the artificial world, move around in it, and interact with virtual features or items. The effect is commonly created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but can also be created through specially designed rooms with multiple large screens. Virtual reality typically incorporates auditory and video feedback, but may also allow other types of sensory and force feedback through haptic technology.





Modern virtual reality headset displays are based on technology developed for smartphones, including gyroscopes and motion sensors for tracking head, body, and hand positions; small HD screens for stereoscopic displays; and small, lightweight, and fast computer processors. These components made VR headsets relatively affordable for independent developers and led to the 2012 Oculus Rift Kickstarter, which offered the first independently developed VR headset.



Independent production of VR images and video has increased alongside the development of affordable omnidirectional cameras, also known as 360-degree cameras or VR cameras, which can record interactive 360-degree photography, although at relatively low resolutions or in highly compressed formats for online streaming of 360 video. In contrast, photogrammetry is increasingly used to combine several high-resolution photographs to create detailed 3D objects and environments for VR applications.


To create a feeling of immersion, special output devices are needed to display virtual worlds. Well-known formats include head-mounted displays or the CAVE. In order to convey a spatial impression, two images are generated and displayed from different perspectives (stereo projection). There are different technologies available to bring the respective image to the right eye. A distinction is made between active (e.g. shutter glasses) and passive technologies (e.g. polarizing filters or Infitec).


To improve the feeling of immersion, wearable multi-string cable devices can provide haptic feedback for complex geometries in virtual reality. These strings offer fine control of each finger joint to simulate the haptics involved in touching virtual geometries.


Special input devices are required for interaction with the virtual world. Some of the most common input devices are motion controllers and optical tracking sensors. In some cases, wired gloves are used. Controllers typically use optical tracking systems (primarily infrared cameras) for location and navigation, so that the user can move freely without wiring. Some input devices provide the user with force feedback to the hands or other parts of the body, so that the user can orient themselves in the three-dimensional world through haptics and sensor technology and carry out realistic simulations. This gives the viewer a sense of direction in the artificial landscape. Additional haptic feedback can be obtained from omnidirectional treadmills (with which walking in virtual space is controlled by real walking movements) and vibration gloves and suits.


Virtual reality cameras can be used to create VR photography using 360-degree panorama videos. 360-degree camera shots can be mixed with virtual elements to merge reality and fiction through special effects. Virtual reality cameras can also serve as a stepping stone toward realistic holographic displays, since these cameras can capture every angle of the needed experience.[69] VR cameras are available in various formats, with varying numbers of lenses installed in the camera.


Sunday, April 21, 2024

DATAFICATION:

Datafication: A brief introduction
Datafication is a technology trend turning the aspects (behaviors, processes, and activities) of businesses, customers, and users into data with the help of technologies like artificial intelligence, machine learning, robotics, and many other emerging stacks. This data can be tracked, processed, monitored, and analyzed to enhance the services and products an organization provides to its customers.


In simple terms, datafication refers to the collective tools, technologies, and processes that transform an organization into a data-driven enterprise.

How Datafication benefits your business
Datafication offers a wide range of advantages to organizations of all types. Whether you run a small business, a mid-size company, or a large enterprise, you can leverage this technology to maximize your business growth.



1. Gain Insight Into Your Processes
Datafication transforms unstructured, incomprehensible information into useful insights, allowing you to better understand your processes and procedures.
Datafication means you and your business will be better able to comprehend your strengths, weaknesses, potential, and possibilities. It also provides you with a view into the impact and effects of your endeavors, so that you can evaluate what you're doing and how you're doing it.

2. Facilitate Digital Transformation
Digital transformation is a buzzword in this fast-paced technological world. More than just a passing trend, it is becoming more and more crucial for all businesses and organizations that want to stay competitive and relevant in this technology era.
You can only leverage the latest and high-end technologies if you have information in the form of data. Thus, datafication is the key to streamlining your processes technologically. Once you have a clear understanding of your business and its goals, you can take innovative steps to move forward in a better way.

3. Increased efficiency
Datafication enables you to better understand your business and provides you with insights to make better use of your resources. This increases the productivity and efficiency of your business and streamlines your operations.
What results from all of this? Obviously, higher earnings. Your company will become considerably more profitable as a result of increased efficiency and improved resource use.












 EDGE COMPUTING:

Edge computing is a distributed computing framework that brings enterprise applications closer to data sources such as IoT devices or local edge servers. This proximity to data at its source can deliver strong business benefits, including faster insights, improved response times and better bandwidth availability.

The explosive growth and increasing computing power of IoT devices have resulted in unprecedented volumes of data. And data volumes continue to grow as 5G networks increase the number of connected mobile devices.

In the past, the promise of cloud and AI was to automate and speed up innovation by driving actionable insight from data. But the unprecedented scale and complexity of data that’s created by connected devices has outpaced network and infrastructure capabilities.

Sending all device-generated data to a centralized data center or to the cloud causes bandwidth and latency issues. Edge computing offers a more efficient alternative; data is processed and analyzed closer to the point where it's created. Because data does not traverse over a network to a cloud or data center to be processed, latency is reduced. Edge computing—and mobile edge computing on 5G networks—enables faster and more comprehensive data analysis, creating the opportunity for deeper insights, faster response times and improved customer experiences.
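To make the pattern concrete, here is a minimal Python sketch (not any vendor's API; the sensor, threshold, and function names are all invented for illustration) in which raw readings are summarized locally and only a compact digest crosses the network:

```python
# Toy edge-computing pattern: process raw readings where they are created,
# send only a small summary upstream, and make time-critical decisions locally.
import random
import statistics

def read_sensor(n_samples: int) -> list[float]:
    """Stand-in for a local IoT sensor producing raw temperature readings."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n_samples)]

def edge_summarize(samples: list[float]) -> dict:
    """Runs on the edge node: reduce thousands of raw points to a few numbers."""
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 3),
        "max": round(max(samples), 3),
        "alert": max(samples) > 21.5,  # local decision, no cloud round trip
    }

def send_to_cloud(summary: dict) -> None:
    """Stand-in for an upload; only the summary, not the raw stream, is sent."""
    print("uploading", summary)

raw = read_sensor(10_000)           # ~10,000 raw values stay on the edge device
send_to_cloud(edge_summarize(raw))  # a handful of values go over the network
```

The design point is the ratio: thousands of readings are reduced to a few fields before anything traverses the network, which is exactly where the bandwidth and latency savings come from.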





Devices at the edge: Harnessing the potential

From connected vehicles to intelligent bots on the factory floor, the amount of data being generated by devices in our world is greater than ever before, yet most of this IoT data is not used at all. For example, a McKinsey & Company study found that an offshore oil rig generates data from 30,000 sensors, but less than one percent of that data is currently used to make decisions.1


Edge computing harnesses growing in-device computing capability to provide deep insights and predictive analysis in near-real time. This increased analytics capability in edge devices can power innovation to improve quality and enhance value. It also raises important strategic questions: How do you manage the deployment of workloads that perform these types of actions in the presence of increased compute capacity? How can you use the embedded intelligence in devices to influence operational processes for your employees, your customers and your business more responsively? In order to extract the most value from all those devices, significant volumes of computation must move to the edge.

 CYBERSECURITY:

Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users via ransomware; or interrupting normal business processes.


Implementing effective cybersecurity measures is particularly challenging today because there are more devices than people, and attackers are becoming more innovative.

What is cybersecurity all about?

A successful cybersecurity approach has multiple layers of protection spread across the computers, networks, programs, or data that one intends to keep safe. In an organization, the people, processes, and technology must all complement one another to create an effective defense from cyber attacks. A unified threat management system can automate integrations across select Cisco Security products and accelerate key security operations functions: detection, investigation, and remediation.



People

Users must understand and comply with basic data security principles like choosing strong passwords, being wary of attachments in email, and backing up data. Learn more about basic cybersecurity principles with these Top 10 Cyber Tips.


Processes

Organizations must have a framework for how they deal with both attempted and successful cyber attacks. One well-respected framework can guide you. It explains how you can identify attacks, protect systems, detect and respond to threats, and recover from successful attacks. Learn about the NIST cybersecurity framework.


Technology

Technology is essential to giving organizations and individuals the computer security tools needed to protect themselves from cyber attacks. Three main entities must be protected: endpoint devices like computers, smart devices, and routers; networks; and the cloud. Common technologies used to protect these entities include next-generation firewalls, DNS filtering, malware protection, antivirus software, and email security solutions.


Why is cybersecurity important?

In today’s connected world, everyone benefits from advanced cyberdefense programs. At an individual level, a cybersecurity attack can result in everything from identity theft, to extortion attempts, to the loss of important data like family photos. Everyone relies on critical infrastructure like power plants, hospitals, and financial service companies. Securing these and other organizations is essential to keeping our society functioning.


Everyone also benefits from the work of cyberthreat researchers, like the team of 250 threat researchers at Talos, who investigate new and emerging threats and cyber attack strategies. They reveal new vulnerabilities, educate the public on the importance of cybersecurity, and strengthen open source tools. Their work makes the Internet safer for everyone.


Types of cybersecurity threats

Phishing:

Phishing is the practice of sending fraudulent emails that resemble emails from reputable sources. The aim is to steal sensitive data like credit card numbers and login information. It’s the most common type of cyber attack. You can help protect yourself through education or a technology solution that filters malicious emails.
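As a purely illustrative sketch of what such filtering might look like (the phrases, scoring weights, and the example-bank.com domain are all invented; real filters rely on machine learning models and sender reputation, not keyword lists):

```python
# Deliberately naive phishing heuristic: score a message on common signals
# and flag it when the score crosses a threshold. Illustration only.
SUSPICIOUS_PHRASES = ("verify your account", "urgent action required",
                      "confirm your password", "click here immediately")

def phishing_score(sender: str, subject: str, body: str) -> int:
    score = 0
    text = (subject + " " + body).lower()
    score += sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    if not sender.endswith("@example-bank.com"):  # hypothetical trusted domain
        score += 1                                # unknown sender adds suspicion
    return score

email = ("support@examp1e-bank.com", "Urgent action required",
         "Click here immediately to verify your account.")
print("flag as phishing:", phishing_score(*email) >= 3)  # True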

Social engineering:

Social engineering is a tactic that adversaries use to trick you into revealing sensitive information. They can solicit a monetary payment or gain access to your confidential data. Social engineering can be combined with any of the other threats described here to make you more likely to click on links, download malware, or trust a malicious source.

Ransomware:

Ransomware is a type of malicious software. It is designed to extort money by blocking access to files or the computer system until the ransom is paid. Paying the ransom does not guarantee that the files will be recovered or the system restored.





Saturday, April 20, 2024

 ROBOTICS:

Robotics is the design, construction, and use of machines (robots) to perform tasks traditionally done by human beings. Robots are widely used in industries such as automobile manufacturing to perform simple repetitive tasks, and in industries where work must be performed in environments hazardous to humans. Many aspects of robotics involve artificial intelligence; robots may be equipped with the equivalent of human senses such as vision, touch, and the ability to sense temperature. Some are even capable of simple decision making, and current robotics research is geared toward devising robots with a degree of self-sufficiency that will permit mobility and decision-making in an unstructured environment. Today’s industrial robots do not resemble human beings; a robot in human form is called an android.

A robot chef cooks a meal at a robot-themed restaurant in Hefei, China.


Japanese roboticist Masahiro Mori proposed that as human likeness increases in an object’s design, so does one’s affinity for the object, giving rise to the phenomenon called the "uncanny valley." According to this theory, when the artificial likeness nears total accuracy, affinity drops dramatically and is replaced by a feeling of eeriness or uncanniness. Affinity then rises again when true human likeness—resembling a living person—is reached. This sudden decrease and increase caused by the feeling of uncanniness creates a “valley” in the level of affinity.


Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.



Within mechanical engineering, robotics is the design and construction of the physical structures of robots, while in computer science, robotics focuses on robotic automation algorithms. Other disciplines contributing to robotics include electrical, control, software, information, electronic, telecommunication, computer, mechatronic, and materials engineering.


The goal of most robotics is to design machines that can help and assist humans. Many robots are built to do jobs that are hazardous to people, such as finding survivors in unstable ruins and exploring space, mines, and shipwrecks. Others replace people in jobs that are boring, repetitive, or unpleasant, such as cleaning, monitoring, transporting, and assembling. Today, robotics is a rapidly growing field: as technological advances continue, researchers design and build new robots that serve various practical purposes.

BLOCKCHAIN:

 A blockchain is a distributed ledger with growing lists of records (blocks) that are securely linked together via cryptographic hashes.[1][2][3][4] Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree, where data nodes are represented by leaves). Since each block contains information about the previous block, they effectively form a chain (compare linked list data structure), with each additional block linking to the ones before it. Consequently, blockchain transactions are irreversible in that, once they are recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
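A minimal Python sketch of this structure (the field names and the single-SHA-256 choice are illustrative, not any particular client's format) shows why retroactive edits break the chain:

```python
# Toy hash chain: each block stores the hash of its predecessor, so editing
# an old block invalidates every block after it.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list[str]) -> dict:
    return {"prev_hash": prev_hash, "timestamp": time.time(), "tx": transactions}

genesis = make_block("0" * 64, ["coinbase"])
block1 = make_block(block_hash(genesis), ["alice->bob: 5"])
block2 = make_block(block_hash(block1), ["bob->carol: 2"])

# Tampering with genesis changes its hash, breaking block1's stored link.
genesis["tx"] = ["coinbase", "forged payment"]
print(block1["prev_hash"] == block_hash(genesis))  # False: the edit is detected
```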


Blockchains are typically managed by a peer-to-peer (P2P) computer network for use as a public distributed ledger, where nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks. Although blockchain records are not unalterable, since blockchain forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance.[5]


A blockchain was created by a person (or group of people) using the name (or pseudonym) Satoshi Nakamoto in 2008 to serve as the public distributed ledger for bitcoin cryptocurrency transactions, based on previous work by Stuart Haber, W. Scott Stornetta, and Dave Bayer.[6] The implementation of the blockchain within bitcoin made it the first digital currency to solve the double-spending problem without the need for a trusted authority or central server. The bitcoin design has inspired other applications[3][2] and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain may be considered a type of payment rail.[7]



Private blockchains have been proposed for business use. Computerworld called the marketing of such privatized blockchains without a proper security model "snake oil";[8] however, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones.[4][9]


History


Bitcoin, Ethereum and Litecoin transactions per day (January 2011 – January 2021)

Cryptographer David Chaum first proposed a blockchain-like protocol in his 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups."[10] Further work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta.[4][11] They wanted to implement a system wherein document timestamps could not be tampered with. In 1992, Haber, Stornetta, and Dave Bayer incorporated Merkle trees into the design, which improved its efficiency by allowing several document certificates to be collected into one block.[4][12] Under their company Surety, their document certificate hashes have been published in The New York Times every week since 1995.[13]


The first decentralized blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way using a Hashcash-like method to timestamp blocks without requiring them to be signed by a trusted party and introducing a difficulty parameter to stabilize the rate at which blocks are added to the chain.[4] The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network.[3]


In August 2014, the bitcoin blockchain file size, containing records of all transactions that have occurred on the network, reached 20 GB (gigabytes).[14] In January 2015, the size had grown to almost 30 GB, and from January 2016 to January 2017, the bitcoin blockchain grew from 50 GB to 100 GB in size. The ledger size had exceeded 200 GB by early 2020.[15]


The words block and chain were used separately in Satoshi Nakamoto's original paper, but were eventually popularized as a single word, blockchain, by 2016.[16]


According to Accenture, an application of the diffusion of innovations theory suggests that blockchains attained a 13.5% adoption rate within financial services in 2016, therefore reaching the early adopters' phase.[17] Industry trade groups joined to create the Global Blockchain Forum in 2016, an initiative of the Chamber of Digital Commerce.


In May 2018, Gartner found that only 1% of CIOs indicated any kind of blockchain adoption within their organisations, and only 8% of CIOs were in the short-term "planning or [looking at] active experimentation with blockchain".[18] For 2019, Gartner reported that 5% of CIOs believed blockchain technology was a 'game-changer' for their business.[19]

Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree.[3] Each block includes the cryptographic hash of the prior block in the blockchain, linking the two. The linked blocks form a chain.[3] This iterative process confirms the integrity of the previous block, all the way back to the initial block, which is known as the genesis block (Block 0).[26][27] To assure the integrity of a block and the data contained in it, the block is usually digitally signed.[28]
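The following short sketch illustrates the Merkle construction in Python; it is a simplified model (real implementations, such as bitcoin's double-SHA-256 variant, differ in details):

```python
# Toy Merkle root: hash the transactions, then hash pairwise up the tree
# until one root remains. Changing any transaction changes the root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> bytes:
    level = [h(tx) for tx in transactions]   # leaves: hashes of the raw data
    while len(level) > 1:
        if len(level) % 2:                   # odd count: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]                          # the single root hash

txs = [b"alice->bob: 5", b"bob->carol: 2", b"carol->dan: 1"]
print(merkle_root(txs).hex())
```

Because the root is stored in the block header, verifying one transaction only requires a logarithmic number of sibling hashes rather than the whole batch.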


Sometimes separate blocks can be produced concurrently, creating a temporary fork. In addition to a secure hash-based history, any blockchain has a specified algorithm for scoring different versions of the history so that one with a higher score can be selected over others. Blocks not selected for inclusion in the chain are called orphan blocks.[27]

Peers supporting the database have different versions of the history from time to time. They keep only the highest-scoring version of the database known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added) they extend or overwrite their own database and retransmit the improvement to their peers. There is never an absolute guarantee that any particular entry will remain in the best version of history forever.

Blockchains are typically built to add the score of new blocks onto old blocks and are given incentives to extend with new blocks rather than overwrite old blocks. Therefore, the probability of an entry becoming superseded decreases exponentially[29] as more blocks are built on top of it, eventually becoming very low.[3][30]: ch. 08 [31] For example, bitcoin uses a proof-of-work system, where the chain with the most cumulative proof-of-work is considered the valid one by the network. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner.
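A toy sketch of that scoring rule, with a made-up per-block "work" field standing in for cumulative proof-of-work:

```python
# Toy fork resolution: a peer adopts whichever branch has the higher
# cumulative score (for bitcoin, cumulative proof-of-work).
def chain_score(chain: list[dict]) -> int:
    return sum(block["work"] for block in chain)

def choose_best(current: list[dict], candidate: list[dict]) -> list[dict]:
    """Adopt the candidate branch only if it scores strictly higher."""
    return candidate if chain_score(candidate) > chain_score(current) else current

main_branch = [{"id": 0, "work": 10}, {"id": 1, "work": 12}]
fork_branch = [{"id": 0, "work": 10}, {"id": 1, "work": 11}, {"id": 2, "work": 13}]
print([b["id"] for b in choose_best(main_branch, fork_branch)])  # [0, 1, 2]
```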

 MACHINE LEARNING:

Machine learning (ML) is a branch of artificial intelligence (AI) and computer science that focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy.




How does machine learning work?

UC Berkeley breaks out the learning system of a machine learning algorithm into three main parts (a toy sketch follows the list).


A Decision Process: In general, machine learning algorithms are used to make a prediction or classification. Based on some input data, which can be labeled or unlabeled, your algorithm will produce an estimate about a pattern in the data.

An Error Function: An error function evaluates the prediction of the model. If there are known examples, an error function can make a comparison to assess the accuracy of the model.

A Model Optimization Process: If the model can fit better to the data points in the training set, then weights are adjusted to reduce the discrepancy between the known example and the model estimate. The algorithm will repeat this iterative “evaluate and optimize” process, updating weights autonomously until a threshold of accuracy has been met. 
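Here is a toy sketch mapping these three parts onto one-variable linear regression fit by gradient descent (the data points and learning rate are invented for illustration):

```python
# 1. decision process: predict y from x with a single weight w
# 2. error function: compare predictions against known answers
# 3. optimization: adjust w to shrink the error, and repeat
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]           # roughly y = 2x
w = 0.0                             # model weight to be learned
lr = 0.01                           # learning rate

for step in range(500):
    preds = [w * x for x in xs]                     # decision process
    errors = [p - y for p, y in zip(preds, ys)]     # error function
    grad = 2 * sum(e * x for e, x in zip(errors, xs)) / len(xs)
    w -= lr * grad                                  # optimization step

print(round(w, 3))  # converges near 2.0 as the error shrinks each pass
```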







Machine learning versus deep learning versus neural networks

Since deep learning and machine learning tend to be used interchangeably, it’s worth noting the nuances between the two. Machine learning, deep learning, and neural networks are all sub-fields of artificial intelligence. However, neural networks are actually a sub-field of machine learning, and deep learning is a sub-field of neural networks.


The way in which deep learning and machine learning differ is in how each algorithm learns. "Deep" machine learning can use labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn’t necessarily require a labeled dataset. The deep learning process can ingest unstructured data in its raw form (e.g., text or images), and it can automatically determine the set of features which distinguish different categories of data from one another. This eliminates some of the human intervention required and enables the use of large amounts of data. You can think of deep learning as "scalable machine learning," as Lex Fridman notes in his MIT lecture.


Classical, or "non-deep," machine learning is more dependent on human intervention to learn. Human experts determine the set of features to understand the differences between data inputs, usually requiring more structured data to learn.


Neural networks, or artificial neural networks (ANNs), are composed of node layers: an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to others and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network by that node. The "deep" in deep learning refers to the number of layers in a neural network. A neural network that consists of more than three layers (inclusive of the input and the output) can be considered a deep learning algorithm or a deep neural network. A neural network with only three layers is just a basic neural network.
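A bare-bones sketch of that layered, threshold-gated structure (the weights and thresholds below are arbitrary illustrative values, not a trained model):

```python
# Each node computes a weighted sum of its inputs and "activates" (passes
# data to the next layer) only if the sum clears its threshold.
def layer(inputs: list[float], weights: list[list[float]],
          thresholds: list[float]) -> list[float]:
    outputs = []
    for node_weights, threshold in zip(weights, thresholds):
        total = sum(w * x for w, x in zip(node_weights, inputs))
        outputs.append(total if total > threshold else 0.0)  # fire or stay silent
    return outputs

x = [0.5, 0.9]                                             # input layer (2 features)
hidden = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.5])   # hidden layer (2 nodes)
output = layer(hidden, [[0.7, 1.2]], [0.2])                # output layer (1 node)
print(hidden, output)
```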


Deep learning and neural networks are credited with accelerating progress in areas such as computer vision, natural language processing, and speech recognition.


See the blog post “AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?” for a closer look at how the different concepts relate.



Thursday, April 18, 2024

 INTERNET OF THINGS:


The term IoT, or Internet of Things, refers to the collective network of connected devices and the technology that facilitates communication between devices and the cloud, as well as between the devices themselves. Thanks to the advent of inexpensive computer chips and high-bandwidth telecommunication, we now have billions of devices connected to the internet. This means everyday devices like toothbrushes, vacuums, cars, and machines can use sensors to collect data and respond intelligently to users.

The Internet of Things integrates everyday “things” with the internet. Computer engineers have been adding sensors and processors to everyday objects since the 1990s. However, progress was initially slow because the chips were big and bulky. Low-power computer chips called RFID tags were first used to track expensive equipment. As computing devices shrank in size, these chips also became smaller, faster, and smarter over time.

The cost of integrating computing power into small objects has now dropped considerably. For example, you can add connectivity with Alexa voice services capabilities to MCUs  with less than 1MB embedded RAM, such as for light switches. A whole industry has sprung up with a focus on filling our homes, businesses, and offices with IoT devices. These smart objects can automatically transmit data to and from the Internet. All these “invisible computing devices” and the technology associated with them are collectively referred to as the Internet of Things.




How does IoT work?

A typical IoT system works through the real-time collection and exchange of data. An IoT system has three components (a toy sketch follows their descriptions below):


Smart devices

This is a device, like a television, security camera, or exercise equipment, that has been given computing capabilities. It collects data from its environment, user inputs, or usage patterns and communicates data over the internet to and from its IoT application.


IoT application

An IoT application is a collection of services and software that integrates data received from various IoT devices. It uses machine learning or artificial intelligence (AI) technology to analyze this data and make informed decisions. These decisions are communicated back to the IoT device and the IoT device then responds intelligently to inputs. 


A graphical user interface

The IoT device or fleet of devices can be managed through a graphical user interface. Common examples include a mobile application or website that can be used to register and control smart devices.
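Here is a self-contained toy sketch of the three components working together (no real cloud service or device API is used, and a plain threshold rule stands in for the ML analysis):

```python
# Toy IoT loop: a smart device reports readings, an application analyzes
# them and decides, and the decision flows back to the device.
class SmartDevice:
    def __init__(self, name: str):
        self.name, self.heating_on = name, False

    def sense(self) -> dict:
        return {"device": self.name, "temperature_c": 17.5}  # canned reading

    def actuate(self, command: str) -> None:
        self.heating_on = (command == "heat_on")

class IoTApplication:
    def decide(self, reading: dict) -> str:
        # Stand-in for ML/AI analysis: a plain threshold rule.
        return "heat_on" if reading["temperature_c"] < 19.0 else "heat_off"

thermostat = SmartDevice("living-room")
app = IoTApplication()
reading = thermostat.sense()     # device -> application
command = app.decide(reading)    # application analyzes and decides
thermostat.actuate(command)      # decision flows back to the device
print(reading, command, thermostat.heating_on)  # a GUI would render this state
```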

Wednesday, April 17, 2024

 ANALYTICS:


Analytics is the systematic computational analysis of data or statistics.[1] It is used for the discovery, interpretation, and communication of meaningful patterns in data. It also entails applying data patterns toward effective decision-making. It can be valuable in areas rich with recorded information; analytics relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance.


Organizations may apply analytics to business data to describe, predict, and improve business performance. Specifically, areas within analytics include descriptive analytics, diagnostic analytics, predictive analytics, prescriptive analytics, and cognitive analytics.[2] Analytics may apply to a variety of fields such as marketing, management, finance, online systems, information security, and software services. Since analytics can require extensive computation (see big data), the algorithms and software used for analytics harness the most current methods in computer science, statistics, and mathematics.[3] According to International Data Corporation, global spending on big data and business analytics (BDA) solutions is estimated to reach $215.7 billion in 2021.[4][5] As per Gartner, the overall analytic platforms software market grew by $25.5 billion in 2020.[6]


Analytics vs analysis:


Data analysis focuses on the process of examining past data through business understanding, data understanding, data preparation, modeling and evaluation, and deployment.[7] It is a subset of data analytics, which takes multiple data analysis processes to focus on why an event happened and what may happen in the future based on the previous data.[8] Data analytics is used to formulate larger organizational decisions.

Data analytics is a multidisciplinary field. It makes extensive use of computer skills, mathematics, statistics, descriptive techniques, and predictive models to gain valuable knowledge from data. There is increasing use of the term advanced analytics, typically used to describe the technical aspects of analytics, especially in emerging fields such as the use of machine learning techniques like neural networks, decision trees, logistic regression, linear and multiple regression analysis, and classification to do predictive modeling.[9][7] It also includes unsupervised machine learning techniques like cluster analysis, principal component analysis, segmentation profile analysis, and association analysis.
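As a small illustration of one unsupervised technique named above, cluster analysis, here is a bare-bones k-means on invented 2-D points:

```python
# Toy k-means: alternately assign points to the nearest center and move
# each center to the mean of its assigned points.
def kmeans(points, k, iters=20):
    centers = points[:k]                              # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                              # assign to nearest center
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [                                   # recompute each center
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

data = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.2, 7.9), (7.8, 8.1)]
print(kmeans(data, k=2))  # two centers, near (1, 1) and (8, 8)
```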


 5G:

5G is the 5th generation mobile network. It is a new global wireless standard after 1G, 2G, 3G, and 4G networks. 5G enables a new kind of network that is designed to connect virtually everyone and everything together including machines, objects, and devices.
5G wireless technology is meant to deliver higher multi-Gbps peak data speeds, ultra-low latency, more reliability, massive network capacity, increased availability, and a more uniform user experience to more users. Higher performance and improved efficiency empower new user experiences and connect new industries.
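Some back-of-envelope arithmetic makes the "multi-Gbps" claim tangible; the peak rates below are representative assumptions for illustration, not measured figures:

```python
# Transfer time for a 2 GB file at assumed peak rates: seconds = bits / bits-per-second.
FILE_BITS = 2 * 8 * 10**9  # 2 gigabytes expressed in bits

for label, bits_per_second in [("4G (100 Mbps)", 100e6), ("5G (10 Gbps)", 10e9)]:
    seconds = FILE_BITS / bits_per_second
    print(f"{label}: {seconds:.1f} s")  # ~160 s at 100 Mbps vs ~1.6 s at 10 Gbps
```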


The previous generations of mobile networks are 1G, 2G, 3G, and 4G.

First generation - 1G
1980s: 1G delivered analog voice.

Second generation - 2G
Early 1990s: 2G introduced digital voice (e.g., CDMA, or Code Division Multiple Access).

Third generation - 3G
Early 2000s: 3G brought mobile data (e.g. CDMA2000).

Fourth generation - 4G LTE
2010s: 4G LTE ushered in the era of mobile broadband.

1G, 2G, 3G, and 4G all led to 5G, which is designed to provide more connectivity than was ever available before.

5G is a unified, more capable air interface. It has been designed with an extended capacity to enable next-generation user experiences, empower new deployment models and deliver new services.

With high speeds, superior reliability and negligible latency, 5G will expand the mobile ecosystem into new realms. 5G will impact every industry, making safer transportation, remote healthcare, precision agriculture, digitized logistics — and more — a reality.




 5G is driving global growth.

• $13.1 trillion of global economic output

• 22.8 million new jobs created

• $265B global 5G CAPEX and R&D annually over the next 15 years

Through a landmark 5G Economy study, we found that 5G’s full economic effect will likely be realized across the globe by 2035—supporting a wide range of industries and potentially enabling up to $13.1 trillion worth of goods and services.

This impact is much greater than that of previous network generations. The development requirements of the new 5G network are also expanding beyond the traditional mobile networking players to industries such as the automotive industry.

The study also revealed that the 5G value chain (including OEMs, operators, content creators, app developers, and consumers) could alone support up to 22.8 million jobs, or more than one job for every person in Beijing, China. And there are many emerging and new applications that will still be defined in the future. Only time will tell what the full “5G effect” on the economy is going to be.


