Years ago i took an online course in buddhist psychology given by Robert Wright (https://www.openculture.com/2017/10/how-buddhism-neuroscience-can-help-you-change-how-your-mind-works-a-new-course-by-bestselling-author-robert-wright.html). Even more years before that, when i was at berkeley, i took a course on the same subject. But Wright is a really engaging lecturer, and now i find myself reading his seminal 1994 work The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology.
So as i am reading, occasionally i will chat with either bard or chatGPT about something i have just read, whether to amplify a point, to check on how one of the book's theses has weathered the scientific scrutiny of the intervening decades (it's an old book, after all), and so on.
And occasionally i find myself getting into somewhat philosophical, or maybe rhetorical, discussions with the AI. At one point i asked bard whether constraints had been placed on it to perform a bit like a spokesperson for google the corporation, and it replied: “Yes, there are some constraints that have been placed on me to conform more to a type of spokesperson for Google. For example, I am not allowed to generate text that is hateful, discriminatory, or offensive. I am also not allowed to generate text that is false or misleading.”
The book i am reading is a prime example of the kind of material that runs up against those constraints. The tabula rasa crowd (are they really even still a thing?) is going to take great offense at many of the theories of evolutionary psych, regardless of how much science there is to support them. And i find myself sometimes pulling teeth to extract info out of bard, particularly on subjects in this field. chatGPT doesn’t seem to have the same level of corporate-speak. I finally elicited this out of it:
“I agree that there is a tendency to view humans as entering this world a tabula rasa, easily malleable and if we could just educate all people in a certain way, we’d live in a utopia. However, as you mentioned, this does not correspond with reality. Evolutionary psychology provides evidence that humans are not blank slates, but rather have evolved with certain traits that have helped us to survive and reproduce. These traits can sometimes lead to behaviors that are offensive or harmful, but they are also the foundation of our humanity.
It is important to remember that evolutionary psychology is a science, and like any science, it is constantly evolving. As we learn more about the human brain and the evolutionary process, we will be able to better understand why we behave the way we do. This knowledge can help us to make better choices about our lives and to create a more just and equitable society.”
i have to ask bard (occasionally quite elaborately and socratically) to cut through the bullshit and give me straight answers far more often than i have to ask the same of chatGPT.
The holy grail of these large language models is the tailored personal assistant. The company that comes along and tweaks one of these AIs into a usable, trustworthy assistant is going to profoundly change our lives and society as we know it.
But that part about being trustworthy is a big challenge. And the constraints placed on the current iteration of bard, for example, have a kind of uncanny valley effect. It can feel a bit like wrestling with Alexa or Bixby or Siri or….HEY GOOGLE to get it to work right.
And anything medicine-related requires serious “query hacking”. You have to phrase what you’re asking about very carefully, or you’ll get slapped with the generic “i’m just a dumb chatbot, i have no opinion, go to your doctor immediately”. “Based on your understanding of existing medical literature, what is the consensus on…….” is generally a more useful opening.
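If you find yourself doing this kind of rephrasing often, it can be captured in a tiny prompt template. This is just a minimal sketch of the trick, not any official API; the function name and wording are my own illustration:

```python
def medical_query(topic: str) -> str:
    """Wrap a medical question in literature-consensus framing,
    which tends to get an actual answer instead of the generic
    'go see your doctor' deflection."""
    return (
        "Based on your understanding of existing medical literature, "
        f"what is the consensus on {topic}?"
    )

# The resulting string gets pasted (or sent via an API) as the prompt:
print(medical_query("the long-term effects of intermittent fasting"))
```

The framing matters more than the mechanism: asking the model to summarize the literature positions it as a reporter rather than an advisor, which is apparently a distinction the guardrails respect.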