Making predictions for the technology sector in any given year is a fool’s errand. Breakthroughs happen unexpectedly, at least for the general public; technophiles have been eagerly awaiting them. Concepts once relegated to science fiction suddenly become headlines, then buzzwords, then commonplace, sometimes all in one year. Looking at you, OpenAI and ChatGPT.
But that is the view from the outside. Insiders know a great deal of work is being done behind the scenes on projects that are, indeed, science and fiction at the same time. Getting the science right and removing the fiction is the technologists’ domain.
A handful of technologies have dominated the discussion in the recent past. The most popular has been artificial intelligence, the AI everyone talked about in 2023. Other buzzwords include AR (augmented reality), VR (virtual reality), and blockchain, and the network infrastructure beneath them is what will make these technologies possible and widely available.
Give Me an A
Take the advance of AI as an example. The current popular version, the LLM (large language model) behind ChatGPT, can be either awesome or frustrating, and sometimes both at the same time. Generative pretrained transformers, commonly known as GPT, are a family of neural network models that use the transformer architecture and are the advancement powering generative AI applications such as ChatGPT. GPT models give applications the ability to create human-like text and content (images, music, and more) and to answer questions in a conversational manner. Organizations across industries are using GPT models and generative AI for Q&A bots, text summarization, content generation, and search.
The breakthrough represented by GPT models is an important milestone in the adoption of ML (machine learning) because the technology can be used to automate a wide range of tasks, from translating foreign-language documents to writing blog posts, designing websites, creating animation, writing computer code, researching complex topics, and even composing poems or songs.
According to AWS (Amazon Web Services), the value of these models lies in their speed and the scale at which they can operate. For example, while you might need several hours to research, write, and edit an article on digital twins, a GPT model can produce one in seconds. GPT models have also pushed AI research toward artificial general intelligence, so that machines can help organizations reach new levels of productivity and reinvent their applications and customer experiences.
While GPT models are called “artificial intelligence,” this is a commercial description. GPT models are neural network-based language prediction models built on the transformer architecture. They analyze natural language queries, known as prompts, and predict the best possible response based on their understanding of language.
GPT models rely on the knowledge they gain after they’re trained with hundreds of billions of parameters on massive language datasets. They can take input context into account and dynamically interpret different parts of the input, making them capable of generating long responses, not just the next word in a sequence. When asked to generate a piece of Shakespeare-inspired content, a GPT model does so by remembering and reconstructing new phrases and entire sentences with a similar literary style.
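To make that prompt-and-predict loop concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model as a stand-in (ChatGPT itself cannot be downloaded). It builds a response one token at a time by repeatedly picking the most likely next token.

```python
# Minimal sketch of prompt-in, prediction-out text generation.
# Assumes the "transformers" and "torch" packages are installed;
# GPT-2 is used only as a small, publicly available stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "A digital twin is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):                      # generate 30 tokens, one at a time
        logits = model(ids).logits           # a score for every word in the vocabulary
        next_id = logits[0, -1].argmax()     # pick the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Production systems sample more cleverly and run vastly larger models, but the underlying loop of predicting the next token, then the next, is the same.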
The transformer neural network architecture uses self-attention mechanisms to focus on different parts of the input text during each processing step. A transformer model captures more context and improves performance on NLP (natural language processing) tasks.
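For readers curious about the mechanism itself, the following is a bare-bones sketch of scaled dot-product self-attention in NumPy. It is a simplification: real transformer layers learn separate query, key, and value projections and run many attention heads in parallel.

```python
# Bare-bones scaled dot-product self-attention (single head, no learned weights).
import numpy as np

def self_attention(x):
    """x: array of shape (sequence_length, model_dim)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                              # how much each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ x                                         # context-aware mixture of the inputs

tokens = np.random.randn(5, 8)        # five tokens, eight-dimensional embeddings
print(self_attention(tokens).shape)   # (5, 8): each token now carries context from the others
```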
Failures in AI-Land
News organizations tested ChatGPT and found it could produce wonderful writing in a variety of styles, filled with facts pulled from the internet. That last part should give you an idea of why it can be frustrating: not all “facts” on the internet are true. But an AI writer doesn’t know that, even if it’s programmed carefully. It goes out and finds data, massages it into the written style requested, and presents it. At this level, it has no way to determine whether that data is truth or fiction.
Yes, that is much like a human cruising the internet, which is why so many conspiracy theories have so many followers on the web. While a search engine presents a list of sites that can, might, or should answer the search question, the AI acts as the human would: collecting those lists, going through the data presented there, and generating what it considers the information asked of it.
A University of Cincinnati professor of philosophy and psychology, Tony Chemero, notes “LLMs generate impressive text, but often make things up whole cloth. They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t know what the things they say mean.”
The people who make LLMs call it “hallucinating” when the models make things up, although Chemero says, “It would be better to call it BS,” because LLMs just create sentences by repeatedly adding the most statistically likely next word, and they don’t know or care whether what they say is true. The main takeaway is that LLMs are not intelligent in the way humans are because they “don’t give a damn,” Chemero says, adding, “Things matter to us humans. We are committed to our survival. We care about the world we live in.”
Small Worlds
If you take the AI concept and narrow the focus of the database it uses, some practical applications become evident. The McKinsey Global Institute looked at several industries that could benefit from current levels of AI. One difference, however, is that a general-purpose LLM isn’t necessarily the best version to use; McKinsey sees generative AI tuned to an organization’s own data as the better fit. Generative AI (gen-AI) works by using an ML model to learn the patterns and relationships in a dataset of human-created content. It then uses the learned patterns to generate new content.
The most common way to train a generative AI model is supervised learning: the model is given a set of human-created content along with corresponding labels, and it learns to generate content that is similar to the human-created content and tagged with the same labels.
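As a rough illustration of that recipe, the sketch below fine-tunes the small GPT-2 model (an illustrative stand-in, not what any vendor actually ships) on a couple of toy labeled examples, prepending each label to the human-created content the model should learn to imitate.

```python
# A loose sketch of supervised fine-tuning on labeled, human-created content.
# GPT-2 and the toy examples are illustrative stand-ins, not a production recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Each example pairs a label with the human-created content the model should imitate.
examples = [
    ("listing description", "Sunny two-bedroom flat near the riverfront, recently renovated."),
    ("maintenance request", "Tenant reports the lobby door closer is sticking in cold weather."),
]

model.train()
for epoch in range(3):
    for label, content in examples:
        text = f"{label}: {content}{tokenizer.eos_token}"
        ids = tokenizer(text, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss   # standard next-token prediction loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After training, prompting with a label nudges the model toward that style of content.
```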
Real estate investors, for example, have mountains of both proprietary and third-party data about properties, communities, tenants, and the market itself. This information can be used to customize existing gen-AI tools so they can perform real estate–specific tasks, such as identifying opportunities for investors at lightning speed, revolutionizing building and interior design, creating marketing materials, and facilitating customer journeys, all while opening up new revenue streams.
Before there was ChatGPT, McKinsey notes, AI-assisted forecasts altered how investment professionals think about the future, and dynamic pricing models changed how several industries charge for goods and services. Gen-AI represents a fresh chance for the real estate industry to learn from its past and transform itself into an industry at technology’s cutting edge. Based on work by the McKinsey Global Institute, AI could generate $110 billion to $180 billion or more in value for the real estate industry.
For all the hype AI has received, many real estate organizations are finding it difficult to implement and scale use cases, and thus have not yet seen the promised value creation. This is not surprising: deriving competitive advantage from AI is not as simple as deploying one of the major foundation models, and many things have to go right in an organization to make the most of the opportunity. The same is true of other industries.
Supply-chain management solutions based on artificial intelligence are expected to be potent instruments to help organizations tackle their challenges. An integrated, end-to-end approach can address the opportunities and constraints of all business functions, from procurement to sales. Gen-AI’s ability to analyze huge volumes of data, understand relationships, provide visibility into operations, and support better decision making makes it a game changer.
Getting the most out of these solutions is not simply a matter of technology, however; companies must take organizational steps to capture the full value from gen-AI. The good news is AI-based solutions are available and accessible to help companies achieve next-level performance in supply-chain management. Solution features include demand-forecasting models, end-to-end transparency, integrated business planning, dynamic planning optimization, and automation of the physical flow—all of which build on prediction models and correlation analysis to better understand causes and effects in supply chains.
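To make the demand-forecasting piece tangible, here is a deliberately tiny sketch that predicts next week’s demand from the previous three weeks using scikit-learn. Real gen-AI planning tools draw on far richer data and far more sophisticated models, so treat this only as an illustration of the prediction-model idea.

```python
# A deliberately tiny demand-forecasting sketch: predict next week's demand
# from the previous three weeks. The demand figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

demand = np.array([120, 135, 128, 150, 160, 155, 170, 180, 175, 190], dtype=float)

# Lag features: each row is three consecutive weeks; the target is the following week.
X = np.array([demand[i:i + 3] for i in range(len(demand) - 3)])
y = demand[3:]

model = LinearRegression().fit(X, y)
next_week = model.predict(demand[-3:].reshape(1, -1))
print(f"Forecast for next week: {next_week[0]:.0f} units")
```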
Successfully implementing gen-AI-enabled supply-chain management has enabled early adopters to improve logistics costs by 15%, inventory levels by 35%, and service levels by 65%, compared with slower-moving competitors. New offerings include demand planning (which has been transformed by integrating machine learning and harnessing new sources of data), real-time inventory management, and dynamic margin optimization of end-to-end chains with digital twins.
Which AI Solution?
Selecting the right solution is critical. To manage the complexity of today’s supply chain, new solutions need to be smartly designed and adapted to specific business cases. They also need to fit well with the organization’s strategy. This alignment enables companies to tackle key decision-making points with an adequate level of insight while avoiding unnecessary complexity. However, implementation can require significant time and investments in both technology and people—meaning the stakes are high to get it right.
Today, investment decisions are often informed through individual analysis of bespoke data pulls across sources. An investor interested in warehouses, for example, typically starts by performing a macroanalysis of markets that have attractive factors such as ports, airport locations, and high e-commerce volume. Then, they perform more granular analysis to locate areas of interest, pulling building information from local brokers or digital tools. As part of the decision-making process, the investor conducts discrete analyses to figure out how their investment hypotheses have panned out in the past.
With a gen-AI tool that’s fine-tuned using internal and third-party data, an investor can simply ask, “What are the top 25 warehouse properties up for sale that I should invest in?” Or, “Which malls are most likely to thrive in the future?” The tool can sort through the unstructured data, both internal (such as the performance of a company’s existing properties and the lease terms related to this performance) and third party (such as U.S. Census data and publicly recorded comparable sales). This multifaceted analysis can be overlaid on a list of properties for sale to identify and prioritize specific assets that are worth manual investigation.
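Under the hood, such a tool typically pairs a language model with a retrieval step that scores documents against the investor’s question. The sketch below shows a simplified, hypothetical version of that retrieval step using TF-IDF similarity from scikit-learn; the property descriptions are invented for illustration, and a real tool would hand the top-ranked documents to a language model for analysis and summarization.

```python
# Simplified sketch of the retrieval step behind a "which properties should I look at?" query.
# The property descriptions and question are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

properties = {
    "Warehouse A": "500,000 sq ft distribution center near the port, high e-commerce tenant demand.",
    "Mall B": "Regional shopping mall with declining foot traffic and two anchor vacancies.",
    "Warehouse C": "Last-mile logistics facility beside the airport, fully leased through 2030.",
}

question = "Which warehouse properties near ports or airports should I invest in?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(properties.values())   # one vector per property description
query_vec = vectorizer.transform([question])                 # vector for the investor's question
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Rank properties by similarity to the question; the best matches go to manual review.
for name, score in sorted(zip(properties, scores), key=lambda p: p[1], reverse=True):
    print(f"{name}: {score:.2f}")
```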
In the gen-AI future, those with access to and control over unique, informative data will be able to generate insights that others cannot. Companies can start by thinking about what data they need—as well as what proprietary data is available but not currently being collected.
It is essential not only to have the best data set but also to have it engineered the right way with the right data governance. A conversational AI tool that has been trained on a building’s past maintenance requests can efficiently respond to resident complaints. A tool trained on a real estate portfolio’s net-operating-income data can provide answers about performance that could be useful for investment decisions and for reporting to investors and internal company divisions. IoT (Internet of Things) sensors and computer vision applications in office buildings, for example, can provide anonymized insights into how tenants use spaces, creating nuanced views of the built environment. Tenant apps and dashboards are not merely interaction channels; they themselves can become data sources. What kind of amenity space a residential tenant books, what stores a shopper in a mall browses, or what services an office tenant needs to produce an event are all valuable pieces of data that can be harnessed and structured.
In Part 2, we’ll discuss two more “A” technologies, AR (augmented reality) and its companion VR (virtual reality). The category known as autonomous vehicles will round out the “A” coverage, and it, along with the current “Big B,” blockchain, will be found in Part 3. Stay tuned.
Want to tweet about this article? Use hashtags #construction #sustainability #infrastructure #AI #IoT #5G #cloud #edge #AR #VR #blockchain