#039 ChatGPT Integrates With Canva, Yale’s Approach To Teaching And Learning With Generative AI, Google to Crack Down on AI-Generated Election Ads.
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 6 minutes to consume.
AI BYTE # 1 📢 - ChatGPT and Canva: A Powerful Combination for Visual Content Creation And Social Marketing.
⭐ Social marketing is a rapidly evolving field that requires businesses and entrepreneurs to create engaging and attractive visuals for their audiences.
However, not everyone has the time or skills to design their own logos, banners, flyers, and other graphics. That’s why OpenAI, a leading AI research company, has unveiled a new plugin for its popular chatbot, ChatGPT, that integrates with Canva, the online design platform.
ChatGPT is a chatbot built on OpenAI’s GPT-4 language model, which generates human-like text based on user input. ChatGPT can help users with various tasks, such as writing blog posts, creating tweets, generating captions, and more.
Now, with the Canva plugin, users can also create stunning visuals with just a few clicks.
To use this feature, users need to subscribe to ChatGPT Plus, which costs $20 per month and grants access to the Canva plugin and the updated GPT-4 model. Then, users can open ChatGPT in their browser and install the Canva plugin from the plugin store.
After that, users can describe the visual they want to create in the chatbot’s prompt box, such as "I am an AI enthusiast active on Twitter. Create a banner for my account."
The chatbot will then generate a list of visuals from which users can choose their favorite option, edit it in Canva, and download it directly to use as needed.
This integration could improve the way users generate visuals by offering a streamlined and user-friendly approach to digital design. Users no longer have to do a lot of manual work or switch between different platforms to create their graphics. They can simply use ChatGPT’s natural language interface and Canva’s intuitive editing tools to produce high-quality visuals in minutes.
This is not the first time that OpenAI has expanded the capabilities of ChatGPT to stay ahead in the competitive AI sector.
Earlier this year, OpenAI launched a feature that let ChatGPT browse the web with Microsoft Bing, but it had to remove it after users discovered it could be used to access paywalled content.
OpenAI has also been facing growing competition from other powerful models such as Claude AI, which can handle up to 100,000 tokens of context and read PDFs, and Google’s Bard, which is also gearing up to implement its own plugin system.
OpenAI’s initiative to integrate Canva with ChatGPT aligns with its broader strategy to enhance the chatbot’s capabilities, making it a versatile tool that caters to various user needs.
By tapping into the social marketing space, OpenAI is demonstrating its vision to create artificial intelligence that can benefit humanity in multiple domains.
AI BYTE # 2 📢 - ChatGPT In The Classroom: Yale’s Approach To Teaching And Learning With Generative AI
⭐ ChatGPT and other large language models have been making headlines for their potential applications and implications in various domains, including education.
How are educators and students using ChatGPT and other generative AI tools in the classroom? How are they addressing the ethical and pedagogical challenges posed by these technologies?
Jenny Frederick is an Associate Provost at Yale University and the Founding Director of the Poorvu Center for Teaching and Learning, which provides resources for faculty and students. She helps lead Yale’s approach to ChatGPT.
Frederick explained that Yale never considered banning ChatGPT and instead wants to work with it. She said that while generative AI is new, asking students to do tasks that machines can also perform is not.
She gave the example of calculus, which has been taught for decades despite the existence of calculators. She said that teachers need to revisit their learning objectives and justify why they are asking students to do certain tasks that machines can also do.
She also said that it is too early to institute prescriptive policies about how students can use ChatGPT and other Generative AI tools. She said that Yale wants to encourage an environment of learning and experimentation and that teachers should look to their students for guidance.
She said that students are ahead of the faculty in using these technologies and that they want to use them responsibly. She suggested that teachers should try out these tools themselves, and have a conversation with their students about how and when they are allowed to use them.
Frederick acknowledged that there are some risks and challenges associated with using ChatGPT and other Generative AI tools in education. She said that cheating is one concern, but not the main one.
She said that cheating is driven more by students’ mental health and time management than by the availability of these tools, and that Yale wants to help students avoid situations where they feel tempted to cheat.
She also said that privacy is another concern, especially when students input their own information into these systems. She said that Yale has strict data management policies, and that teachers should be aware of how these systems work and how they store, process, or monitor the inputs.
She said that there are also ethical questions about providing labor to OpenAI or other corporations that own these systems.
Frederick concluded by saying that ChatGPT and other generative AI tools are here to stay, and that Yale wants to prepare its students for a world where these technologies are integrated in various industries.
She said that Yale sees this as an opportunity to enhance teaching and learning, rather than a threat.
I found Frederick’s insights very enlightening and inspiring. I think that Yale’s approach to ChatGPT and other Generative AI tools is innovative and ethical and that it sets an example for other higher education institutions.
I would love to hear your thoughts on this topic. Do you agree with Yale’s approach?
AI BYTE # 3 📢 - Google to Crack Down on AI-Generated Election Ads with Disclosure Requirements
⭐ As AI becomes more advanced and accessible, it also poses new challenges and opportunities for political campaigns.
One of the areas where AI can have a significant impact is in the creation and distribution of election ads, which can influence voters’ opinions and behaviors.
Google, one of the largest platforms for online advertising, has recently announced a new policy that will require verified election advertisers to make “clear and conspicuous” disclosures when their advertisements contain AI-generated content.
This includes images, videos, and audio created or modified by AI techniques, such as deepfakes, synthetic speech (text-to-speech), and imagery produced by generative adversarial networks (GANs).
According to Google’s blog post, the update to its political content policy will arrive in mid-November and will apply to all ads that target voters in the U.S. The disclosures will have to be placed in clear locations on the ads, such as on the top or bottom of the screen, or as an overlay on the content.
The policy will not affect videos uploaded to YouTube that are not paid advertising, even if they are uploaded by political campaigns.
Google’s policy is aimed at increasing transparency and accountability in election advertising, as well as protecting voters from misinformation and manipulation.
AI-generated content can be used for various purposes, such as creating realistic portraits of candidates, generating catchy slogans and headlines, or producing fake endorsements or testimonials.
While some of these uses can be benign or creative, others can be deceptive or malicious, especially if they are not disclosed to the viewers.
Google is not the only tech company that has faced scrutiny over its handling of misinformation. YouTube, which was acquired by Google in 2006, faced backlash after announcing this year it would stop taking down content containing false claims about the 2020 presidential election.
X, formerly known as Twitter, does not have specific guidelines for AI-generated ad content.
Meta, the owner of Instagram and Facebook, does not have similar policies either, though it does have a ban against “manipulated media” such as deepfakes.
As the 2024 U.S. election approaches, we can expect to see more AI-generated content in election ads, as well as more regulations and policies from tech companies and governments.
As consumers of technology news, we should be aware of the potential benefits and risks of AI-generated content, and how it can affect our democracy and society.
We should also educate ourselves and others on how to identify and verify AI-generated content, and how to report any violations or abuses.
What are your thoughts on Google’s new policy on AI-generated election ads? Do you think it will be effective and enforceable?
How do you think AI-generated content will shape the future of political communication? Share your opinions and insights in the comments below.