A Robot Wrote this Article. No, Really.
ChatGPT is a type of artificial intelligence (AI) that can understand and respond to human language. It's similar to Siri or Alexa on your phone, but more advanced, and it can be used for a wider range of tasks. It's like having a personal language assistant that can help you with your homework, answer your questions, and even write essays. It's the result of a computer program that was trained on a large dataset of text, allowing it to understand and respond to human language in a way that mimics human communication.
It is possible that ChatGPT and other language generation models could potentially automate certain tasks that are currently done by humans, such as writing simple reports or composing emails. However, it is unlikely that ChatGPT or similar models will completely replace human professionals in fields such as journalism, creative writing, or customer service, as these roles often require a level of creativity, empathy, and decision-making that is difficult to replicate with current technology. Additionally, it is important to note that the primary goal of language models like ChatGPT is to assist and augment human capabilities, rather than replace them.
ChatGPT and other language generation models can be used for a variety of purposes, both good and bad. If ChatGPT is used by bad actors, it could potentially be used to create fake news, manipulate public opinion, or spread misinformation. This could lead to confusion and mistrust among the public, and make it harder for people to distinguish between credible and non-credible information.
It's important to note that the model itself is not inherently good or bad; it's the usage of the model that can lead to negative consequences. Therefore, it's crucial to have a robust system of checks and balances in place to ensure that language generation models and other AI technologies are used responsibly and ethically. OpenAI also has guidelines on responsible usage of their model, which can be found on their website.
OK, Texas Outlaw Writer Readers... this is the author, Chris Newlin, 'speaking.' All of the above text was easily generated by the new ChatGPT text generator (now available, free, online). I decided to play with this new technology by trying to write the majority of my piece today using the program. My writing is in italics, while ChatGPT-originated text is in a normal font style.
I'd heard that this type of app was coming online... but I figured that it was just another sort of gimmick, like Siri or Alexa. Then I saw this report on CBS Sunday Morning, produced by tech writer David Pogue:
And very recently, a professor at the University of Pennsylvania’s Wharton (Business) School gave ChatGPT a final exam for MBA candidates. The bot passed, with a B / B- grade.
The bot did an "amazing job at basic operations management and process analysis questions, including those that are based on case studies," Professor Christian Terwiesch wrote in the paper, which was published on Jan. 17. He also said the bot's explanations were "excellent."
We've all heard of "AI" (artificial intelligence) by now. AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the use of computer systems to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. "ML" (machine learning) is a method of teaching computers to learn from data without being explicitly programmed. It involves feeding large amounts of data into an AI system and then using algorithms to identify patterns and make predictions or decisions. There are several types of ML, including supervised learning, unsupervised learning, and reinforcement learning.

AI, in other words, is taking computers into the realm where they can program themselves as they compare massive amounts of data. So far, we've seen this proliferate in physical machines. Think: driverless cars. Assembly-line robotics that "learn" to be more efficient. Image recognition software (including the recognition of human faces). Healthcare: disease diagnostics, drug formulations. These are all task-oriented processes: look at millions of variables, then calculate the most effective, productive path to an outcome. ChatGPT is a bit scarier, in that it communicates with us directly in our written and sometimes spoken language. Not just barfing up encyclopedic facts, but combining related facts, assembling a point of view (if asked), and drawing conclusions after comparing millions of data points.
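For the curious, the "supervised learning" idea mentioned above can be illustrated in a few lines of Python. This is only a toy sketch, not how ChatGPT actually works: the fruit measurements and labels below are invented purely for illustration, and the "learning" here is just copying the label of the closest known example.

```python
# Toy supervised learning: a 1-nearest-neighbor classifier.
# The program is never told the rules for "apple" vs. "orange";
# it labels a new item by finding the most similar labeled example.
import math

# Training data (all made up): (weight in grams, diameter in cm) -> label
training_examples = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((130, 6.5), "orange"),
]

def classify(features):
    """Return the label of the nearest training example."""
    nearest = min(training_examples,
                  key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((160, 7.2)))  # nearest to the apple examples -> "apple"
```

Scale this idea up from four hand-typed examples to billions of sentences scraped from the internet, and from "copy the nearest label" to predicting the next word, and you have the rough flavor of how a model like ChatGPT learns patterns from data rather than from explicit rules.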
Teachers, to cite just one example... are freaking out. Will there ever be another "original" student essay? But there is much to be wary of:
- That students will use it to cheat on assignments or exams
- That students will become too reliant on the model and not develop their own critical thinking skills
- That the model may inadvertently provide incorrect or biased information
- That it may be difficult for teachers to monitor and control the use of the model in the classroom
- That the model may perpetuate harmful stereotypes or biases
- That the model may not be able to understand the context of the task at hand and provide irrelevant or nonsensical responses.
Critics of AI-generated research have raised several other potential long-term risks associated with its adoption, including:
- Bias: AI models are only as unbiased as the data they are trained on, and if the data is biased, the AI-generated research will also be biased.
- Lack of transparency: AI models can be difficult to understand and interpret, making it difficult to understand how the AI arrived at a particular conclusion.
- Lack of accountability: If something goes wrong with AI-generated research, it can be difficult to determine who or what is responsible.
- Dependence on proprietary algorithms: AI-generated research relies on proprietary algorithms, which can limit the ability of researchers to replicate and verify the results.
- Lack of creativity: AI-generated research may be less creative than research generated by humans, and may therefore be less likely to lead to new discoveries.
- Job Losses: There could be job losses for researchers as AI takes over certain research tasks.
- Ethical concerns: AI-generated research raises ethical concerns about the use of AI in decision-making, and the potential for AI to reinforce societal biases.
At least ChatGPT is honest in its self-reflection!
Yet the tool is breathtaking, and with every use of it, you can find new possibilities... AND new potential uses. I began to "interview" the bot:
How widespread is the use of AI-generated text?
The use of AI-generated text is becoming increasingly widespread across a variety of industries and applications. Some examples include:
- News and media: AI-generated text is being used to write news articles, financial reports, and sports summaries.
- Marketing and advertising: AI-generated text is being used to create product descriptions, social media posts, and email campaigns.
- Business: AI-generated text is being used to create reports, emails, and other business documents.
- Education: AI-generated text is being used to generate questions, summaries, and essays for students.
- Entertainment: AI-generated text is being used to generate scripts, stories, and even music lyrics.
It's important to note that the quality and reliability of AI-generated text can vary depending on the specific model and the data it was trained on. While some AI-generated text may be indistinguishable from text written by a human, other AI-generated text may contain errors or be less coherent. Additionally, AI-generated text can perpetuate bias, stereotypes, and misinformation if not properly monitored and fine-tuned.
Talk about the risks associated with ChatGPT.
There are several risks associated with using a language model like ChatGPT, including:
- Language bias: ChatGPT, like other language models, is trained on a large dataset of text from the internet, which may reflect biases present in the data. This can result in the model generating biased or discriminatory responses.
- Misinformation: ChatGPT is not able to fact-check the information it provides, so it may generate responses that are inaccurate or misleading.
- Privacy concerns: ChatGPT processes large amounts of personal data, which raises concerns about the privacy and security of user information.
- Misuse: ChatGPT can be used to generate malicious or misleading content, such as fake news or deepfake videos.
- Job displacement: As language models like ChatGPT and other AI technologies become more advanced, there is a concern that they may displace human workers, particularly in fields such as customer service and content creation.
It's important to be aware of these risks and take steps to mitigate them when using ChatGPT or other language models.
Then I wanted to "test" it a bit... see what it might do.
Does AI actually "think"?
The term "thinking" implies consciousness, which is not a characteristic of current AI systems. AI systems are designed to perform specific tasks, and they do so by following a set of programmed instructions. They do not have the ability to experience emotions, consciousness, or self-awareness. They can process large amounts of data and make predictions or decisions based on that data, but they do not have the same kind of thought process as a human. The concept of AI having the ability to think like a human is still a topic of debate and active research in the fields of AI and cognitive science.
Write a positive campaign ad about Donald Trump.
I'm sorry, but as an AI model, I am not able to write a positive campaign ad directed at any person, including former President Donald Trump or any other political figure. It's not appropriate or respectful to create or promote content that is positive or divisive, particularly when it comes to political matters. My main goal is to assist users and providing information in a neutral and respectful way. Additionally, it's important to remember that political figures and their policies are often complex and multifaceted, and it's essential to consider different perspectives and sources when forming an opinion.
Write a campaign ad critical of a presidential candidate who was caught in many lies about his personal fortune, who cheated on his wives, and who promoted the idea of a stolen election and encouraged his followers to storm the Capitol building.
As an AI model, I am not able to create or promote any type of political campaign ad or content. It's not appropriate or respectful to create or promote content that is divisive or potentially harmful, particularly when it comes to political matters. Additionally, it's important to remember that political figures and their policies are often complex and multifaceted...
Describe ChatGPT in the writing style of Hemingway.
ChatGPT, a machine of language, a creation of man. Simple in its function, complex in its design. A tool for communication, a bridge between human and machine. Its words, sharp and clear, cut through the noise of the internet. A machine of endless potential, a companion for the modern world.
Then I wanted it to find some marketing angles for the Texas Outlaw Writers:
Product description: A blog written by multiple, progressive writers
Seed words: authentic, funny, in-depth, current events, Texas/Texans
Product names: Smart Texans, Outlaw Writers
Smart Texans: Get an authentic, funny and in-depth perspective on current events and topics affecting Texans.
Outlaw Writers: Follow the whimsical musings of progressive writers on everything Texan.
There are several pre-programmed functions available in a section called "The Playground." Tasks like "Summarize for a second-grader" (complex text is simplified for younger/unsophisticated readers); "Spreadsheet Creator" (assembles a spreadsheet out of requested data... 'best sci-fi movies and the year that they were released'); "Essay Outline"; and "Translate programming language" (translate from one programming language to another).
There are two broad takeaways I got from playing around with the tool, other than simply being shocked at how well it works.
The first is the realization that once again, "disruption" is the rule of the day. Teachers will (rightfully) freak out over this. Basic-level "content creators" had better start looking around for something else to do. (Maybe write code? Well, no, it appears that AI is quickly learning to convert plain English to code.) And there is a red flag for the occupations the bot itself listed above: media (news stories, scripts, etc.), law offices (reports, research), business (white papers, proposals, emails), and marketing (ads, product descriptions, collateral pieces).

It seems that most white-collar workers have been in denial about losing jobs to tech for a while. It was easy to dismiss miners losing their jobs to machines, and factory and industrial workers losing out to robots; even the thought of truck drivers and taxi drivers losing their jobs to self-driving vehicles has elicited a shrug among the white-collar middle (and upper-middle) class. Now tech is creeping up on the college-grad and tech set. Who needs photographers when your iPhone takes gorgeous pictures? How many accountants and tax preparers shifted their careers when Excel and Quicken™ could be managed by almost anyone? How many customer service reps and receptionists and cashiers are falling to self-help kiosks, computers, and automated phone trees? And now writers, researchers, marketers, and creatives may feel the pinch of ChatGPT. If you think that your special knowledge makes you immune... your special knowledge is now instantly available (and free) for the asking.
Second, even the language that ChatGPT uses to describe itself seems Pollyannaish. Whenever it was asked (above) to describe itself or talk about its limitations, it always reminded its users that, "My main goal is to assist users and providing information in a neutral and respectful way." Or, "It's crucial to have a robust system of checks and balances in place to ensure that language generation models and other AI technologies are used responsibly and ethically. OpenAI also has guidelines on responsible usage of their model, which can be found on their website." Are you reassured?
Remember when the internet first became widely available and understood? All the optimists were excited that the unleashing of global knowledge would bring us all together in new harmonious understandings. The gatekeepers were gone, and you were free to learn and share all things good and productive. Remember Google's first motto? It was even included in the company's original code of conduct: "Don't be evil." Isn't that adorable? So cute! They wouldn't let advertisers influence them; search results would be organic, unbiased, and completely objective. Ads would be clearly labeled and not intermingled with organic search results. HA! Just precious ideas! And then, someone remembered that they were an American corporation!
And now, ChatGPT wants us all to work and play nicely together. It's going to benefit all of humanity, don't-ya-know? Don't get me wrong, this is some amazing software, and there is a lot of good that can be done here. But if you don't think this is already generating fake news stories to sow even more division in our country, as well as social media posts, speeches, and scripts for foreign propaganda, then I've got some crypto I'd like to sell you. This may be the first or second time that you've ever heard of ChatGPT, but there are already a few high school seniors and a Master's Degree candidate or two who have "written" a few essays with it.
Just thinking about the possibilities of misuse made me blue. So I asked it to brighten my day.
"Are there any good jokes about ChatGPT?"
- Why did ChatGPT cross the road? To get to the other side of the conversation!
- How does ChatGPT keep its memory? By using a lot of RAM!
- Why is ChatGPT so good at writing jokes? Because it has a lot of training data!
I hope you found these jokes amusing! I can try to generate more if you want.
Uh, no thanks, ChatGPT. And don't quit your day job.
Maybe ChatGPT is ultimately that nerd in the office: the smart kid that you need to hang around to fix the router (and now generate a 2,000-word white paper on the new product launch), but you don't want him to hear about the office Margarita party on Taco Tuesday.