The source code for this blog is available on GitHub.


Words are the basic building blocks of language: the smallest units of meaning that can be understood independently. We use them to express thoughts and emotions and to convey information. In both human brains and computational systems like ChatGPT, words are the starting point for natural language understanding. How hard words are to process depends on a language model's ability to recognize, learn, and generate diverse word forms and meanings. In the human brain, this work involves the auditory cortex and memory systems; in artificial intelligence, it depends on the model's architecture, training data, and algorithms.
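One concrete step toward handling diverse word forms is normalization. As a minimal sketch (plain Python, not any particular model's pipeline), lowercasing collapses surface variants like "The" and "the" into a single countable form:

```python
from collections import Counter

def normalize(tokens):
    """Collapse surface variants (e.g. 'The' vs 'the') into one form
    so they are counted as the same word."""
    return [t.lower() for t in tokens]

counts = Counter(normalize("The cat saw the other cat".split()))
# counts["the"] == 2 and counts["cat"] == 2; "The" no longer appears
```

Real systems go further, with stemming, lemmatization, and subword tokenization, but the principle is the same: map many surface forms to fewer underlying units.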


Phrases are groups of words that function together to convey meaning. They play a crucial role in natural language understanding because they are the first step in building more complex structures like sentences. In the human brain, Wernicke's area processes phrases, laying the foundation for syntactic and semantic comprehension. In computer systems like ChatGPT, parsing and understanding phrases means identifying the relationships between words and their meanings, which requires advanced algorithms and machine learning techniques to capture the nuances and context of human language.


Sentences are strings of words and phrases organized according to grammatical rules to express complete thoughts. They form the basis of communication, both written and spoken. The human brain processes sentences using multiple regions, including the angular gyrus and the prefrontal cortex, which handle syntactic and semantic analysis. In computational systems like ChatGPT, natural language understanding requires sophisticated models that interpret sentences accurately, accounting for word order, tense, and contextual meaning. The difficulty is compounded for longer or more ambiguous sentences.
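Even finding sentence boundaries is nontrivial. Here is a naive sketch (a hypothetical regex, not a production segmenter) that splits on end punctuation followed by whitespace; real segmenters must also handle abbreviations like "Dr." and "e.g.":

```python
import re

def split_sentences(text):
    """Naively split text wherever '.', '!' or '?' is followed by whitespace."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

split_sentences("Words form phrases. Phrases form sentences! Do sentences form paragraphs?")
# → ['Words form phrases.', 'Phrases form sentences!', 'Do sentences form paragraphs?']
```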


Paragraphs are collections of related sentences that form a coherent unit of thought. They organize information and ideas into a logical sequence, making the message easier for the reader to follow. The human brain processes paragraphs by chunking and sequencing ideas, leveraging the prefrontal cortex. In computer systems like ChatGPT, understanding paragraphs involves processing multiple sentences, identifying the connections between them, and recognizing the overall theme or message. This level of natural language understanding demands even more computation and more advanced algorithms to grasp context and meaning accurately.


A corpus is a large, structured set of texts, often used in computational linguistics and natural language processing to train and evaluate language models like ChatGPT. The number of possible relationships among the words in a corpus grows combinatorially with its size, making it computationally intractable to process and understand such data by brute force. Models like ChatGPT instead employ attention layers, which compress this vast space into a more manageable one. Attention mechanisms let the model focus on the relevant parts of its input, efficiently capturing context and the relationships among words and phrases. This approach dramatically improves the model's ability to process complex language structures, making it a powerful tool for natural language understanding.
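To make the idea concrete, here is a minimal sketch of scaled dot-product attention in NumPy (toy dimensions and random values, not ChatGPT's actual implementation): each token's query scores every key, a softmax turns the scores into weights, and the output mixes the values according to those weights.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

# Self-attention over three toy token embeddings of dimension 4
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(X, X, X)
# Each row of `weights` is a probability distribution over the three tokens
```

Because each token attends to every other token in one step, the model captures long-range relationships without enumerating every possible word combination.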
