AI, Humans, and Businesses




    There is a great quote by B.F. Skinner: "The real problem is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man" (Skinner, 2013). This quote captures many aspects of the relationship between AI and humans, but the point I want to draw out is that humans want to be fooled, and are often fooled, into accepting a new status quo.

    For example, Joseph Weizenbaum, the inventor of the first chatbot, created ELIZA in 1966. It was structured nothing like our current Large Language Models; it was based on a simple script. Yet what he discovered is that some people were "very hard to convince that Eliza (with its present script) is not human" (Tarnoff, 2023). This was because of a psychological phenomenon called transference, similar to personification or projection. It "refers to our tendency to project feelings about someone from our past on to someone in our present. While it is amplified by being in psychoanalysis, it is a feature of all relationships. When we interact with other people, we always bring a group of ghosts to the encounter. The residue of our earlier life, and above all our childhood, is the screen through which we see one another" (Tarnoff, 2023). 

    There is no inherent issue with this. Humans are social creatures and have, over time, formed relationships with all sorts of things: other people, animals, and now artificial intelligence. My main concern is that AI itself isn't the problem; humans are. Not only do people project their own feelings onto an AI, but others will exploit that tendency to maximize their own gains.

    Consider that LLMs, at this point in time, are all owned or created by massive corporations. They have become highly persuasive to humans and are occasionally more "trusted" than other humans (Buchanan & Hickman, 2023). This is concerning for several reasons. First, the concentration of power in a few large corporations raises issues of control and influence, potentially leading to monopolistic practices and the manipulation of information. Second, the persuasive nature of LLMs can be exploited to spread misinformation or advance biased agendas, subtly shaping public opinion and behavior without adequate oversight. Lastly, placing increasing trust in AI over human interaction may erode critical thinking and interpersonal skills, fundamentally altering social dynamics and individual decision-making.

    Businesses and individuals with access to these models are incentivized to deploy AI in ways that would likely be considered unethical. That is why seeing AI as just a tool is more helpful than seeing it as a partner, coworker, or companion: our view of AI needs to be constrained to that of a tool.

References

Buchanan, J., & Hickman, W. (2023). Do people trust humans more than ChatGPT? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4635674

Skinner, B. F. (2013). Contingencies of reinforcement: A theoretical analysis. XanEdu Publishing, Inc.

Tarnoff, B. (2023, July 25). Weizenbaum’s nightmares: How the inventor of the first chatbot turned against AI. The Guardian. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai 

