
Natural Language Processing, de-mystified for communicators

By Sebastian James posted 05-06-2019 07:17 PM

  


About the video above: the ultimate measure of AI, the Turing Test, is a challenge of natural language processing (NLP). Alan Turing developed the test in 1950, and it set a benchmark: a technology passed if a person could not tell whether they were talking to a human or a computer. In May 2018, Google introduced Duplex, a tool that may be a winner. Click on the link and decide for yourself.

 

What is natural language processing?

Towards Data Science defines NLP as “a sub-field of Artificial Intelligence that is focused on enabling computers to understand and process human languages, to get computers closer to a human-level understanding of language.” It was the technology that drove one side of the AI Assistant conversation in the video above, telling the machine audition piece what to say in response.

A conversation is a multi-person dance of syntax, semantics and pragmatics. Syntax is “the study of the structure of sentences, the principles, both universal and language specific, that govern how words are assembled to yield grammatical sentences,” according to the Department of Linguistics at the University of Pennsylvania. In the AI Assistant example, syntax is how the NLP understands the specific words said by the salon attendant and the restaurant employee.

Semantics is “the study of meaning in language,” according to Dr. Edward Vajda of the linguistics department at Western Washington University. Kai von Fintel, professor of linguistics at MIT, says “semantics tries to figure out what part of meaning is from the signal itself—words, morphemes, etc.” Meaning can be sense, reference, implication, word meanings and more. Think of it like this: if you walked up to a random person wearing a Kelly green sweater and said, “We’re going to beat you silly today,” the reaction would be very different if the exchange happened outside Notre Dame Stadium before a Fighting Irish college football game.

Pragmatics plays a big role in the outcome as well. Pragmatics studies the “ways in which context contributes to meaning,” according to linguist Jacob L. Mey. In his book “Pragmatics,” British linguist George Yule breaks it down into four dimensions: speaker meaning, contextual meaning, relative distance and the amount of what is communicated nonverbally. Clearly, if you say “We’re going to beat you silly today” with a smile on your face, the random person might think it’s a joke. Might. If you have a University of Michigan jersey on, the Irish fan already understands the rivalry and plays along.

Think of how you hear and react to tone in a conversation. Or how you instantly understand which meaning of a word applies. Or the impact of silence on a conversation. Syntax, semantics and pragmatics are the underlying laws of language every human learns experientially. The trick for NLP is how to shortcut this process.

 

How do machines actually do this?

It’s all about semantics: finding the meaning in a sentence or phrase. Four fundamental approaches, linguistic in nature but technological in application, drive the field: distributional semantics, frame semantics, model-theoretical semantics and interactive learning.

Distributional semantics—J.R. Firth, a leading figure in linguistics, famously said “you shall know a word by the company it keeps.” Distributional semantics looks at how words in a sentence are used in relation to others as a method to help machines understand the entire sentence, and the language by extension. These methods are good at analyzing the breadth of a language, but not its depth. Depth could include colloquialisms, regional dialects, slang and more.
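To make the idea concrete, here is a minimal sketch in plain Python (a toy illustration with a made-up corpus, not a production system). It counts the “company” each word keeps, then scores similarity between the resulting co-occurrence vectors: words used in similar contexts, like “cat” and “dog” here, score higher than unrelated pairs.

```python
# Toy distributional semantics: words that appear in similar contexts
# end up with similar co-occurrence vectors.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks fell on trade fears",
    "markets fell on rate fears",
]

WINDOW = 2  # how many neighbors on each side count as "company"
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if i != j:
                vectors[word][words[j]] += 1

def cosine(a, b):
    """Similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" keep similar company, so they score higher than "cat" and "stocks".
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["stocks"]))
```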

Frame-based—this approach focuses on analyzing phrases and sentences within a specific reference frame.  Let’s establish “Bob and I went to the bar to watch the game” as the frame. “I’m meeting Bob at the bar to watch the game” fits within the frame, because they both refer to two friends going to a bar to watch a game. “Bob and Larry are meeting me at the bar for the game” would lie outside of the frame because of Larry. 
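As a hedged sketch of how a frame might be represented in code, the bar-and-game scenario below becomes a record with named slots, and a sentence fits the frame only when its parse fills the slots the same way. The WatchGameFrame class and its slots are hypothetical, invented for this illustration.

```python
# Toy frame semantics: a frame is a stereotyped situation with named
# slots; a sentence either fills the slots identically or falls outside.
from dataclasses import dataclass

@dataclass
class WatchGameFrame:
    """Hypothetical frame: friends meeting at a venue for an activity."""
    companions: frozenset   # who is going with the speaker
    venue: str
    activity: str

frame = WatchGameFrame(frozenset({"Bob"}), "bar", "watch the game")

# "I'm meeting Bob at the bar to watch the game" parses to the same slots:
candidate = WatchGameFrame(frozenset({"Bob"}), "bar", "watch the game")
print(candidate == frame)  # True: fits the frame

# "Bob and Larry are meeting me at the bar for the game" adds a participant:
candidate = WatchGameFrame(frozenset({"Bob", "Larry"}), "bar", "watch the game")
print(candidate == frame)  # False: Larry puts it outside the frame
```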

Model-theoretical—in a paper on NLP, Mariya Yao cites a noted NLP researcher’s very simple scenario to help make sense of model-theoretical NLP: telling your phone to set a reminder.

You say to your phone:

“Remind me to buy milk after my last meeting on Monday” 

With model-theoretical, the technology will analyze those words and related data to determine what it needs to know to fulfill the request. To complete this particular direction, the machine needs to parse the “when”. As long as it has access to schedule information, it can refer to the date and time to check for conflicts before setting the reminder.

Model-theoretical depends upon two linguistic concepts: model theory and compositionality. According to Yao, “Model theory refers to the idea that sentences refer to the world,” like “circles are round” or “Alaska is a state in the United States.” Everything you need to know about a sentence can be found, directly or indirectly, in the meanings of its words. Within that, compositionality assumes the parts of the sentence can be processed to deduce the meaning of the whole. Once complete, the reminder appears on your phone.
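Here is a minimal, hypothetical sketch of that reminder example in Python. The request is split into parts (compositionality), and the “when” is evaluated against a toy calendar standing in for the world (model theory). The parse and resolve helpers are invented for illustration, not drawn from any real assistant.

```python
# Toy model-theoretic interpretation: decompose the request, then
# evaluate its temporal constraint against a model of the world.
import datetime

calendar = {  # toy model of the world: Monday's meetings
    "Monday": [datetime.time(9, 0), datetime.time(15, 30)],
}

def parse(utterance):
    """Naive compositional parse: split into a task and a time constraint."""
    task_part, _, when_part = utterance.partition(" after ")
    task = task_part.replace("Remind me to ", "")
    return {"task": task, "constraint": when_part}

def resolve(constraint, model):
    """Evaluate 'my last meeting on Monday' against the calendar model."""
    day = constraint.split()[-1]   # "Monday"
    return day, max(model[day])    # latest meeting that day

request = "Remind me to buy milk after my last meeting on Monday"
parsed = parse(request)
day, time = resolve(parsed["constraint"], calendar)
print(f"Reminder set: '{parsed['task']}' on {day} after {time}")
# Reminder set: 'buy milk' on Monday after 15:30:00
```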

Interactive learning—what if you could teach machines to perform a task even if they didn’t understand what you were saying? Interactive learning is where humans help machines accomplish tasks by telling the machine what to do, showing it what the result should look like and checking that the result matches the instruction. As the computer comprehends, a new direction is given. This is a step-by-step, language-agnostic, rule-based way for machines to understand words and sentences.

In a test of a language game called SHRDLURN, where a machine was instructed how to move a set of blocks from one location to another, humans who used short, consistent directions fared better than those who used other strategies. And since the machine knew nothing about language, the human could use any language, or any group of characters, to teach, provided they used them consistently.
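Below is a minimal sketch of the idea, assuming nothing about the real SHRDLURN implementation. The machine starts with zero knowledge of language and simply records which action the human accepts for each utterance, which is why any consistent vocabulary, in any language, works.

```python
# Toy interactive learning: the machine learns utterance -> action
# pairings only from the human's corrections.
ACTIONS = {"add_red", "add_blue", "remove_top"}
lexicon = {}  # learned mapping: utterance -> action

def machine_guess(utterance):
    """Use a learned meaning if we have one, otherwise guess arbitrarily."""
    return lexicon.get(utterance, sorted(ACTIONS)[0])

def teach(utterance, correct_action):
    """The human shows what the result should look like; if the machine's
    guess does not match, it records the correction for next time."""
    guess = machine_guess(utterance)
    if guess != correct_action:
        lexicon[utterance] = correct_action  # learn from the correction

# Any consistent signal works: English, another language, even symbols.
teach("stack red", "add_red")
teach("rouge dessus", "add_red")
print(machine_guess("stack red"))     # add_red
print(machine_guess("rouge dessus"))  # add_red
```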

These semantic approaches inform the development of many popular NLP tools.

 

A look at the tools of natural language processing

Automatic summarization—machines create short stories from data, like financial summaries, box scores, etc. Agolo has a summary tool you can test on its homepage. If you want to build your own, Google’s TensorFlow is a user-friendly product. For fun, try the SummarizeBot.

Co-reference resolution—how do you figure out which mentions refer to the same entity? In other words, how do you determine which words refer to the same object? “The red wagon,” “the blue sky” and the pronouns that later stand in for them all need to be tied back to a single object in a conversation.

Machine translation—translate languages in near real-time. Unbabel’s products are for the customer service sector, offering near real-time chat translation. Chat responses can be localized for regional dialect. AI Translate offers six levels of accuracy across 120 languages.

Named entity recognition—which words map to proper names, like people and organizations, and what type each entity is. Prodigy, Lexalytics and Stride all have this expertise as part of a larger suite of tools. (A code sketch after this list shows an open-source version of this and several other capabilities.)

Natural language generation—more sophisticated than automatic summarization above, NLG turns data into copy easily read by humans. Every Monday morning during the NFL season,  Automated Insights turns game stats from 70 million Yahoo fantasy football games into personalized summaries for each fantasy manager.

Natural language understanding (NLU)—technology that can identify and understand syntax, semantics and pragmatics from text. The catchword for NLU is Conversational AI, focusing on helping machines understand the details of human speech for chatbots or machine audition/NLP applications. Semantic Machines, recently purchased by Microsoft, was a leader in this sector. 

Parsing—grammatical analysis of a given sentence.

Part of speech tagging—within a sentence, map the part of speech for each word. 

Question answering—programming a machine to provide answers to spoken or written questions. Jane.ai, a St. Louis, MO, company, has a range of solutions to help machines access information and respond to voice or text queries.

Relationship extraction—similar to named entity recognition, this technology identifies relationships among named entities (Teo is Rosa’s cousin).

Sentence breaking—finding the beginning and ending of sentences.

Sentiment analysis—understanding the emotion in words, from simple automated keyword analysis to the fundamentals of emotion models. Lexalytics, mentioned above, is a respected company in this area.

Stemming—reducing words to their root; interesting, interested and interests all reduce to interest.

Terminology extraction—extract relevant terms from text on the fly, at the heart of dictation and translation. 

Topic segmentation—within text, identify segments related to a specific topic and identify the topic as well.
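For readers who want to experiment, here is a short sketch showing open-source versions of several tools from this list (sentence breaking, part-of-speech tagging, parsing, named entity recognition and, via lemmatization, a close cousin of stemming) using the spaCy library. This is an illustration only; it does not reflect how any vendor named above works.

```python
# A sketch of several NLP tools using the open-source spaCy library.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Automated Insights is based in Durham. It writes fantasy football recaps.")

# Sentence breaking: find the beginning and ending of sentences.
for sent in doc.sents:
    print("SENTENCE:", sent.text)

# Part-of-speech tagging and parsing: each word's role and grammatical head.
for token in doc:
    print(token.text, token.pos_, token.dep_, "->", token.head.text)

# Named entity recognition: proper names and their types.
for ent in doc.ents:
    print("ENTITY:", ent.text, ent.label_)

# Lemmatization, a cousin of stemming: reduce words to a base form.
print([token.lemma_ for token in nlp("interesting interested interests")])
```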

Any combination of the above would be incredibly valuable to a communicator interested in the deep dynamics of social conversation, analyzing petabytes of text, customer analysis at the scale of the internet, and more. Among the options for agencies interested in exploring artificial intelligence, NLP offers the most promise for immediate impact.

 

NLP and Porter: is it too late for a forward-thinking agency to corner the market?

In terms of timing, yes. The Crimson Hexagon tool has been around since 2007, when it was still the project of a Harvard University professor. True Global Intelligence, a tool from Fleishman-Hillard and other big PR partners, launched in 2016. However, it looks like the bulk of US activity is in the enterprise sector, not among small and medium-sized companies, state and federal government, or even larger mid-sized PR firms. That gap is where Porter might be more applicable.

In terms of differentiation, an enterprising firm could succeed by replicating the tools of the big firms, tailored to the needs of SMBs, especially if one of the bigs is flexible enough to white-label its product. Such a firm would stand apart as an economical provider of NLP services, or of a specific type of service.

  

Strategies for success: take the advice of Fleishman-Hillard’s Natasha Kennedy

I interviewed Natasha Kennedy, Global Managing Director of Fleishman-Hillard’s True Global Intelligence, an AI tool developed jointly by Fleishman and other industry partners. One of the discussion points was how agencies can efficiently add NLP and AI to their suite of services. From a wide-ranging discussion, I think this particular response makes the most sense for a PR leader ready to make a move into AI:

“I’d start with the basics, beginning with building a company culture that embraces data. PR people are admittedly numbers-averse—something they are going to have to get past. Your clients have more data to prove financial value, regulatory compliance and social responsibility. If you don’t understand data, how can you effectively tell their stories?”

“Leaders looking at AI and how to get started often feel intimidated by the daunting nature of how to dive into something without much knowledge. In an article I wrote for LinkedIn, I proposed a few ways to think differently about integration.”

“Instead of building, consider licensing. Is the technology available from a vendor looking for opportunities to white-label their product? Instead of inventing, consider investing. Consider what you could offer an AI startup; and how they could help you in return? Instead of the perfect product, consider a staged approach. Does the ultimate version of your tool need to launch in due time? Or can iterations of it go to market as versions of the original?”

Her best and most human piece of advice? “And finally, learn, learn, learn from your successes and failures. You may be nodding your head but are you really doing it? Do you have a process in place to not only learn but share and incorporate learnings into future thinking? This last part is really the key to better results next time,” she says.

 

One thing before we move along

NLP is the most powerful AI tool for communicators. It is a significantly more powerful follow-up to Google Trends, the Keyword tool and almost every analytics tool out there. Communicators fearful for their jobs should make NLP the first thing they learn.
