Artificial Intelligence (AI) is a prominent talking point in academia due to fears surrounding its data collection methods, its human-like writing capabilities, and the opportunities it opens for misuse. When students and instructors see “AI,” the tool ChatGPT often comes to mind, as it is arguably one of the most popular generative AI programs. However, an issue arises when individuals simply equate AI with ChatGPT: doing so narrows their understanding of AI to the generative subset alone, dismissing its other branches and creating misunderstandings about how AI is used.
So, just what is AI? Artificial Intelligence is the use of computer systems programmed to perform tasks that would otherwise require human intelligence. It encompasses many facets and depends heavily on large amounts of data for training and application. AI is commonly equated with one popular generative AI tool, ChatGPT, owing to the demand for and fascination with generative AI, a type of AI that can create “new” content based on user prompts, input, and data. However, generative AI is merely a single branch of the larger AI tree, and AI offers much more. In an effort to increase awareness of AI’s capabilities and encourage individuals to harness it as a tool, this complicated subject can be explained by looking at the six main branches of AI and how each is applied and used.
1. Deep Learning
This branch relies on multilayered, “deep” artificial neural networks that are trained on large sets of data, often through a process of classification. A network is built from three kinds of components: an input layer, one or more hidden layers, and an output layer. When a piece of data is fed into the machine, it passes through these layers, each computing mathematical functions, to arrive at an output or solution. Deep learning processes data somewhat like the human brain does, mimicking the way humans recognize patterns. It “allow[s] a computer system to ‘learn’ from experience, rather than rely wholly on pre-programmed knowledge”1 and automates tasks that typically require human assistance or intelligence.
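To make the layered structure concrete, here is a minimal sketch of a forward pass through a tiny fully connected network, written in Python with NumPy. The layer sizes, random weights, and ReLU activation are illustrative choices, not details drawn from the sources above.

```python
import numpy as np

def relu(x):
    # Activation function: passes positive values, zeroes out negatives.
    return np.maximum(0, x)

def forward(x, weights, biases):
    """One forward pass through a tiny fully connected network.

    Each (W, b) pair is one layer; adding more hidden layers to the
    list is what makes the network "deep."
    """
    for W, b in zip(weights, biases):
        x = relu(W @ x + b)
    return x

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs.
# (A real network would use a task-specific output layer, not ReLU.)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]

print(forward(np.array([1.0, 0.5, -0.2]), weights, biases))
```

Training would then adjust the weights from data rather than leave them random, which is the “learning from experience” the quotation above describes.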
Even though deep learning is a critical component of many emerging technologies, it is not a fundamentally new form of artificial intelligence. Common use cases appear in everyday products such as virtual assistants, voice-activated search, translation tools, and automatic facial recognition. Deep learning also continues to be used in fields such as business, science, and engineering to process data efficiently and improve automation, for example in self-driving cars and heavy-duty factory machines that can detect people or objects and stop.
The introduction of a subset called generative adversarial networks (GANs) changed the approach to deep learning. A GAN is an unsupervised learning algorithm that generates new data based on its input and large amounts of training data, and GANs form a basis for generative AI. Drawing from this knowledge, the technology can produce a variety of content, including graphics, audio, video, and text.
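As a rough illustration of the adversarial setup, the sketch below trains a toy GAN in PyTorch to mimic a one-dimensional Gaussian distribution. The architectures, learning rates, and data here are invented for demonstration and are far simpler than any production GAN.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian (mean 4, std 1.25).
real_data = lambda n: torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                  nn.Sigmoid())                                    # discriminator

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to tell real samples from fakes.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()  # freeze G for this step
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss(D(fake), torch.ones(64, 1))  # label fakes as "real"
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generated samples' mean should drift toward the real mean (~4.0).
print(G(torch.randn(1000, 8)).mean().item())
```

The two-step loop is the essence of the adversarial idea: the discriminator learns to catch fakes while the generator learns to produce data the discriminator cannot distinguish from the real thing.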
2. Expert Systems
This interactive branch of AI applies reasoning and knowledge to help solve problems, relying on user input to produce solutions and feedback. Expert systems are designed to help solve complex issues by drawing on knowledge stored in a knowledge base. When a user inputs data, it passes through a rules engine containing knowledge that an expert has placed in the knowledge base; the data then circulates through the rules engine and back to the user as advice or a solution. Such a system is only as capable as its designer and the experts who supply its knowledge base. The system encompasses four elements: a knowledge base, a rule set, an inference engine, and a user interface.2
The knowledge base holds all the facts the system has about a specific subject. It draws this knowledge from sources such as experts and databases and uses it to shape the result produced for the user. The rule set, or rules engine, is the collection of rules that guides how information is evaluated and how user input is answered. The inference engine relies on the knowledge base and rule set to interpret and evaluate information and determine which facts to relay to the user. The user interface is simply the interactive segment where the user inputs data into and receives data from the machine.
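These four components can be sketched in a few lines of Python. The toy facts and rules below are invented for illustration; a real expert system’s knowledge base would be built and validated by domain experts.

```python
# Minimal expert-system sketch: knowledge base, rule set, inference
# engine, and a bare-bones "user interface" (printed output).

knowledge_base = {"fever", "cough"}  # facts supplied by the user or expert

rule_set = [
    # (conditions that must all hold) -> fact to conclude
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest_and_fluids"),
]

def inference_engine(facts, rules):
    """Forward chaining: keep firing rules until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# "User interface": report every conclusion the engine derived.
for fact in sorted(inference_engine(set(knowledge_base), rule_set)):
    print(fact)
```

Note how the second rule only fires after the first one adds “possible_flu” to the facts, which is why the engine loops until nothing new can be concluded.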
One of the early expert systems to utilize AI was MYCIN3, a computer program used to diagnose and treat human infections. MYCIN relied on a database of known infections and their symptoms, along with a rule-based system that guided it in identifying information and making diagnostic decisions. Using a question-and-answer dialogue, the system matched answers to symptoms and infections, produced a list of possible diagnoses, recommended treatment, and explained the reasoning behind its solutions.
3. Fuzzy Logic
The term fuzzy logic refers to solutions that are not clear-cut but vague: fuzzy. Using “if-then” statements, fuzzy logic operates on an analytical proof theory4, measuring a statement’s degree of truth. Although it is very similar to Boolean logic, fuzzy logic does not restrict a statement to an absolute true or false value of 1 or 0. Instead, it analyzes information using values between 0 and 1, since a statement can be deemed partially true and partially false. This degree-based analysis is very flexible, but it also leaves room for inaccuracies or unexpected values.
Its framework revolves around four aspects that help in decision-making: rule base, fuzzification, inference engine, and defuzzification.
The rule base states the guidelines, governing the if-then statements, and seeks to reduce uncertain data and inaccurate outcomes. Fuzzification converts the input data, or crisp numbers, into “fuzzy sets”: vague degrees of membership ranging from 0 to 1. The inference engine processes and evaluates the data through logical matching: values are compared against the implemented rules and conditions and, depending on the input, rules are fired to produce actions. Defuzzification then takes the fuzzy sets produced by the inference engine and converts them back into a crisp value, reducing output error.
When fuzzy logic analyzes a piece of data, it measures degrees of membership and determines an appropriate output. For example, most air-conditioning units have an automatic function that activates heating or cooling as the temperature crosses certain thresholds. The if-then statements fire accordingly: if the temperature is at level X and dropping at rate Y, activate heat; if the temperature is at level X and rising at rate Y, activate cooling.5
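A minimal Python sketch of that pipeline, using made-up membership functions for a thermostat, shows fuzzification, rule firing, and defuzzification in one place.

```python
# Fuzzy thermostat sketch. The membership functions and rule outputs
# below are invented for illustration.

def cold(t):         # fully cold at <= 10 C, not cold at >= 20 C
    return max(0.0, min(1.0, (20 - t) / 10))

def comfortable(t):  # peaks at 20 C, fades to 0 at 10 C and 30 C
    return max(0.0, 1 - abs(t - 20) / 10)

def hot(t):          # not hot at <= 20 C, fully hot at >= 30 C
    return max(0.0, min(1.0, (t - 20) / 10))

def heater_power(t):
    """Fuzzify, apply if-then rules, defuzzify to one crisp output.

    Rules: IF cold THEN 100% heat; IF comfortable THEN 40%; IF hot THEN 0%.
    Defuzzification here is a weighted average of the rule outputs.
    """
    c, m, h = cold(t), comfortable(t), hot(t)
    return (100 * c + 40 * m + 0 * h) / (c + m + h)

for temp in (5, 15, 20, 27):
    print(f"{temp} C -> heater at {heater_power(temp):.0f}%")
```

Notice that 15 °C is partly “cold” and partly “comfortable” at the same time, so the heater setting blends both rules rather than snapping between on and off as Boolean logic would.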
4. Machine Learning
This branch builds algorithms that shape their behavior around empirical data by identifying patterns and common occurrences in datasets. It should not be confused with deep learning, as machine learning relies more heavily on human intervention and the guidance of its programmer. Three major families of machine learning algorithms are supervised learning, unsupervised learning, and reinforcement learning.6
Supervised learning is based on known outcomes in a dataset. When data is fed into the machine, it analyzes the examples and their labels and classifies the information, resulting in a prediction known as the output. In contrast to supervised learning, unsupervised learning relies on unlabeled datasets. It uses predictive modeling to identify patterns, such as similarities and differences, in order to categorize data points and find relevant associations. Reinforcement learning also uses unlabeled data, but the machine produces results through trial and error. Much like the way humans learn in their early years, the algorithm revolves around a reward-and-punishment approach: actions that progress toward a goal are reinforced, while actions that delay or detract from progress are discouraged.
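As a small illustration of the reinforcement idea specifically, the Python sketch below implements a “two-armed bandit” agent that learns by trial and error which of two actions pays off more often. The reward probabilities and exploration rate are invented for demonstration.

```python
import random

# The agent's running estimate of each action's worth, and trial counts.
values = [0.0, 0.0]
counts = [0, 0]

def reward(action):
    # Hidden environment: action 1 succeeds more often than action 0.
    return 1.0 if random.random() < (0.3, 0.8)[action] else 0.0

for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = values.index(max(values))
    r = reward(action)
    counts[action] += 1
    # Incremental average: good outcomes raise the estimate, bad lower it.
    values[action] += (r - values[action]) / counts[action]

print(values)  # the estimate for action 1 should approach ~0.8
```

The reward signal plays the role of reinforcement: the agent never sees labeled examples, only the consequences of its own actions.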
Another example is a spam filter based on machine learning, which runs an algorithm that identifies word occurrences common in phishing emails. The filter weights each suspicious word by the number of times it appears to estimate the likelihood that a message is spam. If the pattern the machine identifies resembles data drawn from previous spam and user reports of spam, it categorizes the suspected phishing attempt as spam. When the user confirms the report, the method is reinforced; when the user rescues the blocked content, that approach is discouraged.
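A minimal sketch of that word-occurrence approach might look like the following. The keywords, weights, and threshold here are invented; a real filter would learn them from large collections of reported messages.

```python
# Toy spam scorer: weight each suspicious word by how often it appears.
spam_weights = {"winner": 2.0, "free": 1.5, "urgent": 1.0, "prize": 2.0}

def spam_score(message, threshold=3.0):
    words = message.lower().split()
    score = sum(spam_weights.get(w, 0.0) * words.count(w)
                for w in set(words))
    return score, score >= threshold

print(spam_score("URGENT you are a WINNER claim your free prize"))
# -> (6.5, True): a high score, so the message is flagged as spam
```

The user-feedback loop described above would correspond to nudging these weights: confirming a report raises them, while rescuing a blocked message lowers them.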
5. Natural Language Processing (NLP)
This branch increasingly relies on large language models (LLMs), deep learning models that can summarize, analyze, recognize, and generate text. An LLM is trained on large datasets, ranging from millions to trillions of data points7, that include human language, which makes it easy for humans to “communicate” with machines but also leaves room for language misuse and bias. NLP detects the language in the input by processing it, then “recognizes” components of that language by drawing on its training data, and produces an output. NLP has several uses and benefits across many industries and applications, chiefly its ability to automate repetitive tasks, improve insight into data, enhance search, and generate content.8
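To illustrate the input-processing-output flow on one classic NLP task, the short Python sketch below performs a crude extractive “summary” by scoring sentences on word frequency. Real NLP systems and LLMs are vastly more sophisticated; this only shows the pipeline shape.

```python
import re
from collections import Counter

text = ("AI is changing many industries. Natural language processing "
        "lets machines read text. Machines can now summarize text quickly.")

# Processing: split into sentences and tokenize into lowercase words.
sentences = re.split(r"(?<=[.!?])\s+", text)
words = re.findall(r"[a-z']+", text.lower())
freq = Counter(words)

def sentence_score(s):
    # A sentence scores higher when its words are frequent overall.
    return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

# Output: the single highest-scoring sentence as the "summary."
print(max(sentences, key=sentence_score))
```

Even this toy version shows why training data matters: the “summary” is entirely determined by what the input text makes frequent, just as an LLM’s outputs are shaped by the language it was trained on.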
ChatGPT is a popular NLP tool that quickly analyzes a user’s input and generates human-like answers. This chatbot is designed to answer questions and generate content in a way that mimics human dialogue. Because it draws from a large dataset that includes user input, it can provide information for a variety of purposes, such as brainstorming, research starters, and data analysis. However, it is far from a perfect system, as that same heavy reliance on user input allows users to mislead the system into producing incorrect information. Oftentimes when it produces a list of sources, “the references often do not correspond to the text created or are fake citations made of a mix of real publication information from multiple sources,” leading to misinformation.9 This reliance also raises ethical questions about consuming and producing content based on user data. Concerns about violating users’ privacy have led companies to crack down on how their internal data is handled, creating policies and security tactics such as disabling access to several popular generative AI tools. Some companies have gone further, compartmentalizing their information to protect their employees and confidential data.
6. Robotics
Robots powered by AI contain a variety of sensors that analyze and identify data in real time, enabling immediate reactions.10 This equipment helps them navigate unpredictable situations while still completing their tasks. Three of the most common types of AI-powered robots are autonomous mobile robots (AMRs), articulated robots (robotic arms), and cobots.11
Autonomous Mobile Robots (AMRs)
These robots gather and analyze information quickly via 3D cameras, sensors, and advanced mapping technology. As they collect data, they make inferences and deliver outcomes or solutions based on their environment and tasks. Because they can adapt to changing environments, they are valuable across various industries and applications. A popular example of an AMR in action is the Roomba, a cleaning robot that uses its built-in sensors to identify and pick up dirt while navigating the layout of a room.
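The sense-decide-act loop such a robot runs can be sketched as follows, with a made-up grid world and “dirt sensor” standing in for real cameras and mapping hardware.

```python
import random

# Toy environment: a 5x5 room with a few randomly placed dirty cells.
GRID = 5
dirt = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(5)}
pos = (0, 0)

def sense(pos):
    # Pretend sensor: is there dirt in the current cell?
    return pos in dirt

def next_position(pos):
    # Pretend navigation: sweep the room row by row.
    x, y = pos
    return (x + 1, y) if x + 1 < GRID else (0, (y + 1) % GRID)

for step in range(GRID * GRID):
    if sense(pos):             # sense the environment
        dirt.discard(pos)      # act: "vacuum" this cell
        print(f"cleaned {pos}")
    pos = next_position(pos)   # decide where to move next
```

A real AMR replaces the fixed sweep with path planning built from its camera and mapping data, but the loop structure is the same.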
Articulated Robots (Robotic Arms)
These are industrial robots whose arms are composed of rotary joints, or axes, whose mobility gives them an extensive range of motion and the ability to perform a wide variety of tasks. Articulated robots are heavily used in manufacturing because of their reliability, enhanced precision12, and tendency not to deviate from their programmed paths. One example of an articulated robot’s function is assembly and packaging: like a human arm and hand, these machines can precisely pick up objects, set them down, and seal boxes.
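A small worked example shows why rotary joints give these arms their reach: the forward kinematics of a two-joint planar arm maps any pair of joint angles to a hand position. The link lengths below are illustrative values, not specifications of any real robot.

```python
from math import cos, sin, radians

L1, L2 = 0.5, 0.3  # upper-arm and forearm link lengths, in meters

def hand_position(theta1_deg, theta2_deg):
    """End-effector position for a 2-joint planar arm.

    theta1 is the shoulder angle; theta2 is the elbow angle relative
    to the upper arm. Each extra rotary joint adds a term like these.
    """
    t1, t2 = radians(theta1_deg), radians(theta2_deg)
    x = L1 * cos(t1) + L2 * cos(t1 + t2)
    y = L1 * sin(t1) + L2 * sin(t1 + t2)
    return x, y

print(hand_position(0, 0))     # arm stretched straight out: (0.8, 0.0)
print(hand_position(90, -90))  # shoulder up, forearm forward: (0.3, 0.5)
```

Sweeping the two angles traces out the arm’s entire circular workspace, which is why a handful of joints is enough for the varied pick-and-place motions described above.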
Cobots
Many fear that robots will take over their jobs,13 but these collaborative robots are smaller than traditional industrial robots and designed to increase productivity by working alongside humans in close proximity. They are compact machines that can learn new operations thanks to their simple programming. Like other industrial robots, cobots are consistent and precise, making them excellent additions to monotonous work such as replenishing stock or assembly-line tasks.
AI’s rapid development has influenced many technological and social practices. Its expansion into various fields, with processes that extend well past the ability to simply generate content, has improved automated systems and workflows. Still, questions about its capabilities, the ethics of its practice, and how best to harness it are being contemplated and tested by academia, businesses, and industries. Although this dilemma will remain prominent, AI should not be a tool to shy away from; rather, it should be viewed as an important aspect of technology to learn about and utilize.
1. Tamara Dunn, Deep Learning (Salem Press Encyclopedia of Science, 2024).
2. Elizabeth Mohn, Expert System (Artificial Intelligence) (Salem Press Encyclopedia of Science, 2022).
3. Edward H. Shortliffe, MYCIN: A Knowledge-Based Computer Program Applied to Infectious Diseases (National Institutes of Health). https://pmc.ncbi.nlm.nih.gov/articles/PMC2464549/
4. Petr Cintula, Christian G. Fermüller, and Carles Noguera, “Fuzzy Logic,” The Stanford Encyclopedia of Philosophy (Summer 2023). https://plato.stanford.edu/entries/logic-fuzzy/
5. Richard Sheposh, Fuzzy Logic (Salem Press Encyclopedia of Science, 2023).
6. Randa Tantawi, Machine Learning (Salem Press Encyclopedia of Science, 2024).
7. GAO-24-106946, Artificial Intelligence: Generative AI Technologies and Their Commercial Applications (U.S. Government Accountability Office, 2024). https://www.gao.gov/assets/gao-24-106946.pdf
8. Cole Stryker and Jim Holdsworth, What Is NLP (Natural Language Processing)? (IBM, 2024). https://www.ibm.com/topics/natural-language-processing
9. University of Pittsburgh, What Is Generative AI? (University Center for Teaching and Learning, 2024). https://teaching.pitt.edu/resources/what-is-generative-ai/
10. Introduction to AI Applications in Robotics (University of San Diego Online Degrees, 2023). https://onlinedegrees.sandiego.edu/application-of-ai-in-robotics/
11. Artificial Intelligence (AI) and Robotics (Intel, 2024). https://www.intel.com/content/www/us/en/robotics/artificial-intelligence-robotics.html
12. Ravi Rao, What Are Articulated Robots? Anatomy, Control System, Advantages, Selection Criteria, and Applications (Wevolver, 2023). https://www.wevolver.com/article/articulated-robots
13. Danny Weller, Artificial Intelligence and Collaborative Robots: The Workers of the Future? (Autonomous Manufacturing, 2024). https://amfg.ai/2024/01/24/artificial-intelligence-and-collaborative-robots-the-workers-of-the-future/