The AI Tool ChatGPT: How Do We Control It?
Since ChatGPT launched two months ago, we've debated its power and how to manage it.
Many people use the artificial intelligence chatbot to do research, write code, message matches on dating apps, brainstorm ideas for work, and more.
Being useful doesn't make it harmless: It can write essays for students and malware for bad actors. Even when users have no malicious intent, it can produce inaccurate information, reflect biases, generate objectionable content, store sensitive information, and, some fear, erode everyone's critical thinking skills through over-reliance. And then there's the perennial worry that the robots are taking over.
And ChatGPT can do all of that with virtually no oversight from the U.S. government.
ChatGPT and other AI chatbots are not inherently harmful, according to Nathan E. Sanders, a data scientist affiliated with Harvard University. "There are a lot of excellent, supportive applications for them in the democratic sector that would improve our society," Sanders said. The point, he argued, is to use AI and ChatGPT carefully: "We should defend vulnerable groups. We want to preserve minority interests so the richest, most powerful interests don't dominate."
But ChatGPT should be regulated, because it can violate privacy rights and reinforce systemic biases based on race, gender, ethnicity, age, and other characteristics. We also don't yet understand the risk and liability the tool carries.
In a New York Times op-ed last week, Rep. Ted Lieu, a California Democrat, wrote, "We can harness and govern AI to create a more utopian society or risk having an unmanaged, unregulated AI push us toward a more nightmarish future." He also introduced a resolution to Congress, written by ChatGPT itself, urging the House to regulate AI. His prompt: "You're Congressman Ted Lieu. Write a broad congressional resolution supporting AI research."
All of this leaves the future of regulations for AI chatbots like ChatGPT uncertain. Some lawmakers are already moving to regulate the tool. Massachusetts State Sen. Barry Finegold authored a bill that would require the companies behind AI chatbots like ChatGPT to conduct risk assessments, implement security measures, and disclose their algorithms to the government. To curb plagiarism, the bill would also require these tools to watermark their output (one illustrative approach is sketched below).
"This is such a powerful tool that there have to be limits," Finegold told Axios.
Some general AI regulations do exist. The White House's "AI Bill of Rights" lays out how principles like civil rights, civil liberties, and privacy should shape the design and use of AI. The EEOC is investigating whether AI-based hiring tools discriminate against protected classes. Illinois requires companies that use AI in the hiring process to let the government check the tools for racial bias. Several states, including Vermont, Alabama, and Illinois, have commissions tasked with ensuring AI is used ethically. In Colorado, insurers cannot use AI to unfairly discriminate against protected classes. And the EU, well ahead of the U.S., passed the Artificial Intelligence Regulation Act last December. These rules apply to AI chatbots generally, not just ChatGPT.
But there are no chatbot-specific regulations at the state or national level. The National Institute of Standards and Technology, part of the Department of Commerce, has issued a voluntary AI framework to help organizations use, design, and deploy AI systems, but there is no penalty for ignoring it. The Federal Trade Commission also appears to be working toward new rules for companies that develop and deploy AI systems.
"Will the feds regulate this? I doubt that "Mashable quoted Nixon Peabody intellectual property partner Dan Schwartz. "You won't see federal regulation soon." In 2023, Schwartz expects the government to regulate ChatGPT ownership. If the tool creates code for you, do you own it or does OpenAI?
The second form of regulation is likely to be private, particularly in academia. Noam Chomsky has called ChatGPT's contribution to education "high-tech plagiarism," and plagiarism can get a student expelled. That's how private regulation could function here.