Thursday 22 October 2020

1. Introduction to Artificial Intelligence



Evolution of AI

In the past few years, AI has evolved into a powerful tool that enables machines to think and act like humans. It has garnered focus from tech companies around the world and is considered the next significant technological shift after the moves to mobile and cloud platforms. Some even call it the fourth industrial revolution. Forbes states, “By 2020, businesses that use AI and related technologies like machine learning and deep learning to uncover new business insights will take $1.2 trillion each year from competitors that don’t employ these technologies.”

This article gives you an overview of the evolution of AI and lays a foundation for understanding the milestones that paved the way for the AI surge.

Artificial Intelligence (AI)

According to the Merriam-Webster dictionary, “Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers.” When a machine can make intelligent decisions, it can be described as artificially intelligent. People often use the terms machine learning, deep learning, and AI synonymously. However, deep learning is a subset of machine learning, and machine learning is a subset of AI.


A Venn diagram showing how deep learning is a kind of representation learning, which is, in turn, a kind of machine learning, which is used for many but not all approaches to AI. Each section of the Venn diagram includes an example of AI.







2. When Did the AI Surge Begin?


Back in the 1800s, AI existed only in myth, fiction, and speculation. Classical philosophers had envisioned machines with human-like minds, but such machines appeared only in works of fiction like Mary Shelley’s “Frankenstein.” The real initiation of AI came in 1956, with a workshop at Dartmouth College whose attendees would be regarded as the leaders of AI for decades to come.

The AI surge began with six major design goals:

  1. Teach machines to reason so they could perform sophisticated mental tasks like playing chess and proving mathematical theorems.
  2. Develop knowledge representation so machines could interact with the real world as humans do; machines needed to be able to identify objects, people, and languages. The programming language Lisp was developed for this very purpose.
  3. Teach machines to plan and navigate the world we live in, so that they could move around autonomously.
  4. Enable machines to process natural language so that they could understand words, conversations, and the context of speech.
  5. Train machines to perceive the way humans do: through touch, sight, hearing, and taste.
  6. Build general intelligence, including emotional intelligence, intuition, and creativity.

All these goals set the foundation for building a machine with human capabilities, and millions of dollars were invested in bringing that vision to life. However, the US government soon realized that the powerful computing technology needed to implement AI did not yet exist. The funds were withdrawn, and the journey took its first halt in the late 80s.

The need for massive amounts of data and enormous computing power disrupted progress in the 80s. The 21st century, however, brought the concept quickly back to life as Moore’s law played out. The heavy processing power that tiny silicon chips hold today has made AI feasible in the current context and has enabled the development of improved algorithms.

There have been four successive catalysts in the AI rebirth and revolution:

  1. The democratization of AI knowledge, which began when world-class research content was made available to the masses, starting with Stanford University’s MOOC taught by Andrew Ng and Udacity’s Intro to ML by Sebastian Thrun and Katie Malone.
  2. Data and computing power (cloud and GPUs), which made AI accessible to the masses without enormous upfront investment or the resources of a mega-corporation.
  3. Even with access to data and computing power, you once had to be an AI specialist to leverage them. In 2015, however, a proliferation of new tools and frameworks made exploring and operationalizing production-level AI feasible for the masses. You can now build on the backs of giants like Google (TensorFlow) and Facebook (PyTorch); a minimal sketch of what these frameworks make possible follows this list. Numerous organizations dedicated to the democratization of AI, such as fast.ai and OpenAI, have been founded.
  4. In the past two years, AI as a service has taken this a step further, enabling easier prototyping and exploration, and even the building of sophisticated, use-case-specific AIs into products. Platforms like Azure AI, AWS AI, Google Cloud AI, IBM Cloud AI, and many more provide AI as a service.
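
To make point 3 concrete, here is a minimal sketch, assuming PyTorch is installed, of how little code a modern framework needs to define and train a small neural network. The network shape, data, and hyperparameters below are toy placeholders invented for illustration, not anything from the article:

import torch
import torch.nn as nn

# A tiny two-layer network: 4 input features -> 8 hidden units -> 1 output.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# Random placeholder data standing in for a real dataset.
inputs = torch.randn(32, 4)
targets = torch.randn(32, 1)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The standard training loop: forward pass, loss, backward pass, update.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

Before frameworks like these existed, each of those steps, the gradient computation especially, had to be derived and coded by hand.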



3. History of Artificial Intelligence


Although the concept of artificial intelligence has been around for centuries, it wasn’t until the 1950s that its true possibility was explored. A generation of scientists, mathematicians, and philosophers had toyed with the concept of AI, but it was the British polymath Alan Turing who asked: if humans use available information, as well as reason, to solve problems and make decisions, why can’t machines do the same? Although Turing described such machines and how to test their intelligence in his 1950 paper “Computing Machinery and Intelligence,” his ideas did not immediately advance the field.

The main halt in growth was the computers themselves. Before any more growth could happen, they needed to change fundamentally: computers could execute commands, but they could not store them. Funding was also an issue up until 1974.

By 1974, computers had flourished: they were faster, more affordable, and able to store more information. Early demonstrations such as Allen Newell and Herbert Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA, which was funded by the RAND Corporation, showed promise toward the goals of problem solving and the interpretation of spoken language in machines. Yet there was still a long way to go before machines could think abstractly, self-recognize, and achieve natural language processing.


In the 1980s, AI research fired back up with an expansion of funding and algorithmic tools. John Hopfield and David Rumelhart popularized “deep learning” techniques that allowed computers to learn from experience, while Edward Feigenbaum introduced expert systems, which mimicked the decision-making processes of a human expert. But it was not until the 2000s that many of the landmark goals were achieved, and AI thrived despite a lack of government funding and public attention.



Timeline of AI





Wednesday 21 October 2020

4. Today's Artificial Intelligence

Today’s AI Research



Today, AI research is constant and continues to grow. Over the last five years, AI research has grown by 12.9% annually worldwide, according to technology writer Alice Bonasio.

Within the next four years, China is predicted to become the biggest global source of artificial intelligence research; it took second place from the United States in 2004 and is quickly closing in on Europe’s number-one spot.

Europe is the largest and most diverse region, with high levels of international collaboration in the field of artificial intelligence research. After China and the United States, India is the third-largest country in terms of AI research output.

When it comes to specifics, the work falls into seven distinct research areas, with AI ethics research remaining comparatively limited:

· Search and Optimization

· Fuzzy Systems

· Natural Language Processing and Knowledge Representation

· Computer Vision

· Machine Learning and Probabilistic Reasoning

· Planning and Decision Making

· Neural Networks

Neural networks, machine learning and probabilistic reasoning, and computer vision show the largest volume of research growth.


5. Present Effects of AI


Artificial intelligence is already being used for so much, with so much more potential, that it is hard to picture our future without its help, especially when it comes to business.

From workflow management tools to trend predictions and even the way brands purchase ads, machine learning technologies are driving increases in productivity like never before.

Artificial intelligence can collect and organize large amounts of information to produce insights and predictions beyond the human capacity for manual processing. It also increases organizational efficiency, reduces the likelihood of mistakes, and detects irregular patterns, such as spam and fraud, to warn businesses in real time about suspicious activity. AI is said to reduce costs in many ways: for example, by “training” machines to handle incoming customer support calls, replacing many jobs in the process. It is also commonly argued that a business that doesn’t use AI is probably falling behind its competitors.
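
As a concrete illustration of the irregular-pattern detection described above, here is a minimal sketch, assuming scikit-learn is available. The transaction amounts and the contamination setting are invented for the example, not taken from any real fraud system:

import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly ordinary transaction amounts, plus one obvious outlier.
amounts = np.array([[25.0], [30.0], [27.5], [22.0], [29.0], [31.0], [5000.0]])

# Isolation forests flag points that are easy to separate from the rest.
detector = IsolationForest(contamination=0.15, random_state=0)
labels = detector.fit_predict(amounts)  # -1 marks an anomaly, 1 marks normal

for amount, label in zip(amounts.ravel(), labels):
    print(f"${amount:,.2f}: {'suspicious' if label == -1 else 'ok'}")

A real deployment would use many more features than the amount alone, but the principle is the same: the model learns what “normal” looks like and raises an alert on anything that deviates from it.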

AI has become so important and advanced that a Hong Kong venture capital firm made history by becoming the first company to appoint an AI board member, chosen for its ability to predict market trends faster than a human.

Artificial intelligence is becoming, and will continue to become, commonplace in every aspect of life: self-driving cars, more accurate weather predictions, and earlier health diagnoses, just to name a few.

6. A Smarter Future




It has been said that we are on the cusp of the Fourth Industrial Revolution, one completely different from the previous three. Where those brought steam and water power, electricity and assembly lines, and computerization, this one challenges our ideas about what it means to be human.

According to Forbes, the Fourth Industrial Revolution “describes the exponential changes to the way we live, work and relate to one another due to the adoption of cyber-physical systems, Internet of Things and the Internet of Systems.”



Smarter technologies in our factories and workplaces, and connected machines that interact, visualize the entire production chain, and make decisions autonomously, are just a couple of the ways the Fourth Industrial Revolution will advance business. One of its greatest promises is the potential to improve the quality of life of the world’s population and raise income levels. Our workplaces and organizations are becoming “smarter” and more efficient as machines and humans start to work together and as we use connected devices to enhance our supply chains and warehouses.



According to Gigabit Magazine, there are seven stages that will create a smarter world with AI:

1. Rule-Based Systems — the domestic applications and RPA software that surround us everywhere, every day (a minimal code sketch of this stage appears after this list).

2. Context Awareness and Retention — algorithms that build up a body of information that is used and updated by machines; for example, chatbots and robo-advisors.

3. Domain-Specific Expertise — machines that can develop expertise in a specific field beyond the capability of humans, thanks to the vast amount of information they can access quickly to reach a decision.

4. Reasoning Machines — these algorithms have a “theory of mind,” some ability to attribute mental states to themselves and others. They have a sense of beliefs, intentions, knowledge, and are aware of how their own logic works. Hence, they have the capacity to reason, negotiate, and interact with humans and other machines.

5. Self-Aware Systems — the goal for those working in the AI field is to create and develop systems with human-like intelligence. No such system exists today; some say one will arrive in as little as five years, while others believe we may never achieve that level of intelligence.

6. Artificial Superintelligence — developing AI algorithms that are capable of outperforming the smartest of humans in every single domain.

7. Singularity and Transcendence — a development path enabled by ASI that could lead to a massive expansion of human capability, where one day we might be sufficiently augmented and enhanced such that humans could connect their brains to each other and to a future successor of the current internet.
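
Returning to stage 1, here is a minimal sketch of what a rule-based system looks like in code. The thermostat rules are invented for illustration and stand in for the domestic applications mentioned above; nothing here is learned from data, which is exactly what separates this stage from the later ones:

# A rule-based system: behaviour comes entirely from hand-written
# if/then rules, not from anything learned from data.
def thermostat_rule(temperature_c: float) -> str:
    if temperature_c < 18:
        return "turn heating on"
    if temperature_c > 24:
        return "turn cooling on"
    return "do nothing"

print(thermostat_rule(15))  # turn heating on
print(thermostat_rule(21))  # do nothing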

Tuesday 20 October 2020

7. Envisioning AI in the Next 20 Years


2020–2025

· Between 70% and 90% of all initial customer interactions are likely to be conducted or managed by AI

· Product development in a range of sectors from fashion items and consumer goods to manufacturing equipment could increasingly be undertaken and tested by AI

· Individuals will be able to define and design the personalised products and services they require in sectors ranging from travel through to banking, savings, and insurance

· The technology is likely to be deployed across all government agencies and legal systems — with only the most complex cases requiring a human judge and full court proceedings

· Autonomous vehicles will start appearing in many cities across the world

· Our intelligent assistants could by then be managing large parts of our lives, from travel planning through to compiling the information we need prior to a meeting.

2026–2035

· Globally approved, smart crypto tokens may be accepted alongside fiat currencies as we edge towards a single global medium of exchange

· Artificial intelligence is likely to have penetrated every commercial sector

· The evolution of AI could see the emergence of a wide range of fully automated DAO businesses including banks, travel agents, and insurance companies

· Scientific breakthroughs could enable us to develop artificial animal and ecosystem intelligence

· We may see the emergence of self-aware and self-replicating software systems and robots

· There is a reasonable possibility of achieving Artificial General Intelligence

· There is a small chance of creating Artificial Super Intelligence

· The singularity remains an unlikely possibility in this timeframe.
