To get the best out of AI you need to write and speak well
Everyone is talking about ChatGPT, whether they are Joe Sixpack users like me or prominent figures in the computing world. There are numerous startups in the space, and well-known tech behemoths are quickly incorporating GPT-like features into their offerings.
Austin Henley, on the other hand, laments that "people are poor at words" and criticises ChatGPT's limitations. As a program manager, he directs the Microsoft PROSE research team's efforts to develop AI-assisted programming tools.
In his blog, he says it's "lazy" to expect users to communicate with software primarily in natural language. He complains that it is entirely the user's responsibility to formulate intelligent queries: what questions to ask, when to ask them, how to ask them, how to interpret the answers and, finally, how many times to ask them.
The process of producing and exchanging meaning through dialogue is the fundamental concept of communication.
This is what Henley is talking about: he is frustrated by the perceived constraints of the text box.
In my opinion, though, the dialogue interaction he depicts is not always negative.
Why?
In the construction of communication messages, humans and machines are now co-learners and co-travellers.
Writing helps people improve their ability to reason, generate new ideas, and present compelling arguments.
According to Mersham, writing is a psychological technology that has evolved to support abstract thought.
When you write, your memory expands, critical analysis becomes easier, and your thinking becomes clearer.
Writing forces us to think more slowly.
Daniel Kahneman, the Nobel laureate in behavioural economics, wrote Thinking, Fast and Slow, in which he discusses the trade-offs between deliberate thought and reactive thought and how writing supports clear, deliberate thinking.
For the best results, all text-input generative AI technologies require knowledgeable inputs. The art and science of knowing how to communicate with each AI tool, known as "prompt engineering," is a relatively new skill. It is comparable to search engine optimisation (SEO) in that a cottage industry of tools and consultants will emerge around knowing which words produce which results. "Prompt engineering" is young and emergent, and it requires an understanding of communication context and semantic differentials.
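As a rough illustration of what prompt engineering adds to a bare question, here is a minimal sketch in Python. The template fields (role, context, output format) are my own illustrative conventions, not the API of any particular tool:

```python
def engineer_prompt(question: str,
                    role: str = "an expert assistant",
                    context: str = "",
                    output_format: str = "a short paragraph") -> str:
    """Wrap a bare question with the context a model needs to answer well.

    The field names here are illustrative assumptions about what a
    well-engineered prompt typically spells out.
    """
    parts = [f"You are {role}."]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Question: {question}")
    parts.append(f"Answer as {output_format}.")
    return "\n".join(parts)

# A naive prompt leaves everything to the model's guesswork:
naive = "Summarise this report."

# An engineered prompt makes role, context and desired form explicit:
engineered = engineer_prompt(
    "Summarise this report.",
    role="a financial analyst",
    context="The report covers quarterly revenue for a retail chain.",
    output_format="three bullet points for a non-specialist audience",
)
print(engineered)
```

The point is not the code itself but the habit it encodes: the same underlying question, phrased with explicit role, context and output expectations, gives the model far more to work with.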
According to the Harvard Business Review, when AI is widely deployed, personnel at all levels of the hierarchy will supplement their own judgement and intuition with the recommendations of algorithms to come to better conclusions than either humans or machines could independently.
ChatGPT, says Azeem Azhar, is "a beneficial second brain" that aids in the development of critical thinking. He describes three aspects of ChatGPT that improve our thinking capacity: the divergent thinker, the communicator, and the demanding critic.
Let's go over each of these in detail with an example.
Divergent thinking
When evaluating a topic, it is critical to examine it from various perspectives. I might think about it in terms of economies of scale, one-of-a-kind incentives, or broader social implications. One of Google's earlier insights was that code engineers are confined to a walled garden, with no regard for broader (and sometimes negative) social ramifications. When I asked ChatGPT about this ethical quandary, it responded like this:
The communicator
Requesting that ChatGPT apply a framework to a new subject is also beneficial. According to a collaborative-constructivist perspective, communication and engagement are at the heart of learning, as evidenced by my published research and 20 years of online teaching experience. According to this viewpoint, it is critical to allow for the collaborative creation of meaning and knowledge through the testing of ideas. In this case, the person and the machine exchange information, provide feedback, and test each other.
Applying a framework in this way forces the prompter to ask the right questions and can be used to investigate counterfactual scenarios.
Here's an example:
ChatGPT, like the Post-It notes collected during a brainstorming session, can occasionally suggest intriguing directions for further investigation.
However, for this approach to be effective, people at all levels of an organisation must trust the algorithms' recommendations and feel empowered to make decisions. This will put the traditional top-down approach found in large, traditionally structured organisations to the test. Employees will be discouraged from using AI if they must consult with a higher-up before taking action.
Giving up control is something no manager wants to do, but it is necessary to achieve dialogue (Mersham 2014). Unpredictability is always anathema to commercial interests.
Afterword
Natural Language Processing (NLP) and Natural Language Generation (NLG) are subfields of computer science focused on creating technologies that can understand and produce human language. The result is what Cynthia Breazeal, the "mother of social robotics," refers to as "sociable robots" — socially intelligent synthetic creatures capable of understanding us, communicating and interacting with us, learning from us, and growing alongside us in their role as collaborators and companions.
Unfortunately, as I mentioned in a previous post, the two disciplines that deal with these topics — engineering and interpersonal communication — have largely failed to recognise and/or capitalise on this interdisciplinary opportunity and challenge.
Engineers, for their part, have attempted to either reinvent the wheel or seek advice from research and researchers in other disciplines, such as sociolinguistics or psychology. Communication scholars, who have in fact spent decades studying human-to-human interpersonal relationships and producing the kind of knowledge base necessary for developing more robust conversational models, have not done much better at making their insights known.
Communication scientists have frequently limited their research efforts and findings to human communication, and when dealing with computers or bots, they have typically regarded the mechanism as a medium of human communicative exchange — what is known as "computer mediated communication," or CMC.
Computer scientists, on the other hand, have long referred to Information Communication Technologies (ICTs), inevitably focusing on the "Information" and "Technology" aspects, assuming that the "C" occurs in the simple transmission of data and less on how meaning is created by humans.
Contrary to popular belief, communication — specifically, conversational interpersonal dialogue — has always defined AI. Alan Turing's "Computing Machinery and Intelligence," first published in the journal Mind in 1950, is credited with defining machine intelligence. Although the term "artificial intelligence" was coined six years later, Turing's seminal paper and the "game of imitation" it describes — now commonly referred to as "the Turing Test" — define and characterise the field.
"The idea of the test," Turing explained, "is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing….". Turing stipulated that if a computer is capable of successfully simulating a human being in communicative exchanges, that is, if a person “cannot tell whether they are talking with a machine or another human being, then that device would need to be considered intelligent”.
Voice interaction is already a huge growth trend in search — increasingly, mobile searches use voice. We can anticipate that ChatGPT users who use the spoken word well will get much better results and save time. Humans can speak at a rate of 150 words per minute while typing at a rate of 40 words per minute. Speech search is more discursive than text search in that it employs longer phrases, including complete sentences and questions. This gives ChatGPT users more finesse.
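A quick back-of-the-envelope check of the rates quoted above (150 words per minute spoken versus 40 typed) shows how large the gap is; the 300-word prompt length is an illustrative assumption:

```python
# Rates quoted in the text above.
SPEAK_WPM = 150  # words per minute, speaking
TYPE_WPM = 40    # words per minute, typing

def minutes_to_enter(words: int, wpm: int) -> float:
    """Time in minutes to produce a given number of words at a given rate."""
    return words / wpm

prompt_words = 300  # illustrative prompt length

spoken = minutes_to_enter(prompt_words, SPEAK_WPM)  # 2.0 minutes
typed = minutes_to_enter(prompt_words, TYPE_WPM)    # 7.5 minutes

print(f"Spoken: {spoken:.1f} min, typed: {typed:.1f} min, "
      f"speech is {typed / spoken:.2f}x faster")
```

At those rates, dictating a prompt is nearly four times faster than typing it, which is the time saving the paragraph above anticipates.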