
Wednesday 17 April 2024

Discussing Artificial Intelligence (AI) in April 2024

 

Well, this month attention has turned away from AI being used to create multiple fake bird species and celebrity images, or from Microsoft using excruciatingly garish alternative landscapes to promote its software; the focus has shifted back to AI being used by bad actors in global and domestic political arenas during election years.


Nature, WORLD VIEW, 9 April 2024:

AI-fuelled election campaigns are here — where are the rules?

Political candidates are increasingly using AI-generated ‘softfakes’ to boost their campaigns. This raises deep ethical concerns.

By Rumman Chowdhury


Of the nearly two billion people living in countries that are holding elections this year, some have already cast their ballots. Elections held in Indonesia and Pakistan in February, among other countries, offer an early glimpse of what’s in store as artificial intelligence (AI) technologies steadily intrude into the electoral arena. The emerging picture is deeply worrying, and the concerns are much broader than just misinformation or the proliferation of fake news.


As the former director of the Machine Learning, Ethics, Transparency and Accountability (META) team at Twitter (before it became X), I can attest to the massive ongoing efforts to identify and halt election-related disinformation enabled by generative AI (GAI). But uses of AI by politicians and political parties for purposes that are not overtly malicious also raise deep ethical concerns.


GAI is ushering in an era of ‘softfakes’. These are images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate’s campaign team itself.


How to stop AI deepfakes from sinking society — and science


In Indonesia’s presidential election, for example, winning candidate Prabowo Subianto relied heavily on GAI, creating and promoting cartoonish avatars to rebrand himself as gemoy, which means ‘cute and cuddly’. This AI-powered makeover was part of a broader attempt to appeal to younger voters and displace allegations linking him to human-rights abuses during his stint as a high-ranking army officer. The BBC dubbed him “Indonesia’s ‘cuddly grandpa’ with a bloody past”. Furthermore, clever use of deepfakes, including an AI ‘get out the vote’ virtual resurrection of Indonesia’s deceased former president Suharto by a group backing Subianto, is thought by some to have contributed to his surprising win.


Nighat Dad, the founder of the research and advocacy organization Digital Rights Foundation, based in Lahore, Pakistan, documented how candidates in Bangladesh and Pakistan used GAI in their campaigns, including AI-written articles penned under the candidate’s name. South and southeast Asian elections have been flooded with deepfake videos of candidates speaking in numerous languages, singing nostalgic songs and more — humanizing them in a way that the candidates themselves couldn’t do in reality.


What should be done? Global guidelines might be considered around the appropriate use of GAI in elections, but what should they be? There have already been some attempts. The US Federal Communications Commission, for instance, banned the use of AI-generated voices in phone calls, known as robocalls. Businesses such as Meta have launched watermarks — a label or embedded code added to an image or video — to flag manipulated media.


But these are blunt and often voluntary measures. Rules need to be put in place all along the communications pipeline — from the companies that generate AI content to the social-media platforms that distribute them.


What the EU’s tough AI law means for research and ChatGPT


Content-generation companies should take a closer look at defining how watermarks should be used. Watermarking can be as obvious as a stamp, or as complex as embedded metadata to be picked up by content distributors.
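As an aside, here is a minimal sketch (in Python, using the Pillow library) of the two flavours of watermark the author describes: a visible stamp drawn onto the image, and a machine-readable label embedded as metadata that a distributor could later read back. The file names and label text are invented for illustration; real provenance schemes such as C2PA's Content Credentials are cryptographically signed and much harder to strip.

# Illustrative only: a visible stamp plus an embedded metadata label on a PNG.
# File names and the "ai_disclosure" key are invented for this example.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.open("campaign_photo.png").convert("RGB")

# 1. Obvious watermark: draw a visible stamp onto the image itself.
ImageDraw.Draw(img).text((10, 10), "AI-GENERATED", fill="white")

# 2. Embedded watermark: store a disclosure label in a PNG text chunk.
meta = PngInfo()
meta.add_text("ai_disclosure", "synthetic=true; tool=unspecified")
img.save("campaign_photo_labelled.png", pnginfo=meta)

# A content distributor could later read the label back before publishing.
print(Image.open("campaign_photo_labelled.png").text.get("ai_disclosure"))

Nothing stops a bad actor from simply deleting a plain metadata chunk, which is why the column argues that the platforms distributing content also have to check and enforce such labels.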


Companies that distribute content should put in place systems and resources to monitor not just misinformation, but also election-destabilizing softfakes that are released through official, candidate-endorsed channels. When candidates don’t adhere to watermarking — none of these practices are yet mandatory — social-media companies can flag and provide appropriate alerts to viewers. Media outlets can and should have clear policies on softfakes. They might, for example, allow a deepfake in which a victory speech is translated to multiple languages, but disallow deepfakes of deceased politicians supporting candidates.


Election regulatory and government bodies should closely examine the rise of companies that are engaging in the development of fake media. Text-to-speech and voice-emulation software from Eleven Labs, an AI company based in New York City, was deployed to generate robocalls that tried to dissuade voters from voting for US President Joe Biden in the New Hampshire primary elections in January, and to create the softfakes of former Pakistani prime minister Imran Khan during his 2024 campaign outreach from a prison cell. Rather than pass softfake regulation on companies, which could stifle allowable uses such as parody, I instead suggest establishing election standards on GAI use. There is a long history of laws that limit when, how and where candidates can campaign, and what they are allowed to say.


Citizens have a part to play as well. We all know that you cannot trust what you read on the Internet. Now, we must develop the reflexes to not only spot altered media, but also to avoid the emotional urge to think that candidates’ softfakes are ‘funny’ or ‘cute’. The intent of these isn’t to lie to you — they are often obviously AI generated. The goal is to make the candidate likeable.


Softfakes are already swaying elections in some of the largest democracies in the world. We would be wise to learn and adapt as the ongoing year of democracy, with some 70 elections, unfolds over the next few months.


COMPETING INTERESTS

The author declares no competing interests.

[my yellow highlighting]



Charles Darwin University, Expert Alert, media release, 12 April 2024, excerpt:


Governments must crack down on AI interfering with elections


Charles Darwin University Computational and Artificial Intelligence expert Associate Professor Niusha Shafiabady.


Like it or not, we are affected by what we come across in social media platforms. The future wars are not planned by missiles or tanks, but they can easily run on social media platforms by influencing what people think and do. This applies to election results.


Microsoft has said that the election outcomes in India, Taiwan and the US could be affected by the AI plays by powers like China or North Korea. In the world of technology, we call this disinformation, meaning producing misleading information on purpose to change people’s views. What can we do to fight these types of attacks? Well, I believe we should question what we see or read. Not everything we hear is based on the truth. Everyone should be aware of this.


Governments should enforce more strict regulations to fight misinformation, things like: Finding triggers that show signs of unwanted interference; blocking and stopping the unauthorised or malicious trends; enforcing regulations on social media platforms to produce reports to the government to demonstrate and measure the impact and the flow of the information on the matters that affect the important issues such as elections and healthcare; and enforcing regulations on the social media platforms to monitor and stop the fake information sources or malicious actors.


The Conversation, 10 April 2024:


Election disinformation: how AI-powered bots work and how you can protect yourself from their influence


Nick Hajli, AI Strategist and Professor of Digital Strategy, Loughborough University



Social media platforms have become more than mere tools for communication. They’ve evolved into bustling arenas where truth and falsehood collide. Among these platforms, X stands out as a prominent battleground. It’s a place where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.


AI-powered bots are automated accounts that are designed to mimic human behaviour. Bots on social media, chat platforms and conversational AI are integral to modern life. They are needed to make AI applications run effectively......


How bots work


Social influence is now a commodity that can be acquired by purchasing bots. Companies sell fake followers to artificially boost the popularity of accounts. These followers are available at remarkably low prices, with many celebrities among the purchasers.


In the course of our research, for example, colleagues and I detected a bot that had posted 100 tweets offering followers for sale.


Using AI methodologies and a theoretical approach called actor-network theory, my colleagues and I dissected how malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy. We can tell if fake news was generated by a human or a bot with an accuracy rate of 79.7%. It is crucial to comprehend how both humans and AI disseminate disinformation in order to grasp the ways in which humans leverage AI for spreading misinformation.
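The article doesn't spell out how that 79.7% figure is obtained, so purely as an illustration of the general approach, here is a toy sketch of a supervised text classifier that separates bot-written from human-written posts. The texts, labels and model choice are invented placeholders, not the researchers' method or data.

# Toy sketch only: a supervised classifier that labels posts as bot-written (1)
# or human-written (0). The texts, labels and model below are invented
# placeholders, not the study's actual data or method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

texts = [
    "Get 5000 followers instantly, click this link now",
    "Cheap followers for sale, limited offer, buy today",
    "Boost your account, guaranteed followers, act fast",
    "Took the dog for a long walk along the river this morning",
    "Finally finished the book my sister recommended last month",
    "Thinking about repainting the kitchen this weekend",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = bot-like, 0 = human-like (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, stratify=labels, random_state=42)

vectoriser = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression().fit(vectoriser.fit_transform(X_train), y_train)
predictions = model.predict(vectoriser.transform(X_test))
print("toy accuracy:", accuracy_score(y_test, predictions))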


To take one example, we examined the activity of an account named “True Trumpers” on Twitter.



The account was established in August 2017, has no followers and no profile picture, but had, at the time of the research, posted 4,423 tweets. These included a series of entirely fabricated stories. It’s worth noting that this bot originated from an eastern European country.




Research such as this influenced X to restrict the activities of social bots. In response to the threat of social media manipulation, X has implemented temporary reading limits to curb data scraping and manipulation. Verified accounts have been limited to reading 6,000 posts a day, while unverified accounts can read 600 a day. This is a new update, so we don’t yet know if it has been effective.


Can we protect ourselves?

However, the onus ultimately falls on users to exercise caution and discern truth from falsehood, particularly during election periods. By critically evaluating information and checking sources, users can play a part in protecting the integrity of democratic processes from the onslaught of bots and disinformation campaigns on X. Every user is, in fact, a frontline defender of truth and democracy. Vigilance, critical thinking, and a healthy dose of scepticism are essential armour.


With social media, it’s important for users to understand the strategies employed by malicious accounts.


Malicious actors often use networks of bots to amplify false narratives, manipulate trends and swiftly disseminate misinformation. Users should exercise caution when encountering accounts exhibiting suspicious behaviour, such as excessive posting or repetitive messaging.


Disinformation is also frequently propagated through dedicated fake news websites. These are designed to imitate credible news sources. Users are advised to verify the authenticity of news sources by cross-referencing information with reputable sources and consulting fact-checking organisations.


Self awareness is another form of protection, especially from social engineering tactics. Psychological manipulation is often deployed to deceive users into believing falsehoods or engaging in certain actions. Users should maintain vigilance and critically assess the content they encounter, particularly during periods of heightened sensitivity such as elections.


By staying informed, engaging in civil discourse and advocating for transparency and accountability, we can collectively shape a digital ecosystem that fosters trust, transparency and informed decision-making.


Philadelphia Inquirer, 14 April 2024:

Expect to see AI ‘weaponized to deceive voters’ in this year’s presidential election

Alfred Lubrano


As the presidential campaign slowly progresses, artificial intelligence continues to accelerate at a breathless pace — capable of creating an infinite number of fraudulent images that are hard to detect and easy to believe.


Experts warn that by November voters in Pennsylvania and other states will have witnessed counterfeit photos and videos of candidates enacting one scenario after another, with reality wrecked and the truth nearly unknowable.


“This is the first presidential campaign of the AI era,” said Matthew Stamm, a Drexel University electrical and computer engineering professor who leads a team that detects false or manipulated political images. “I believe things are only going to get worse.”


Last year, Stamm’s group debunked a political ad for then-presidential candidate Florida Republican Gov. Ron DeSantis that appeared on Twitter. It showed former President Donald Trump embracing and kissing Anthony Fauci, long a target of the right for his response to COVID-19.


That spot was a “watershed moment” in U.S. politics, said Stamm, director of his school’s Multimedia and Information Security Lab. “Using AI-created media in a misleading manner had never been seen before in an ad for a major presidential candidate,” he said.


“This showed us how there's so much potential for AI to create voting misinformation. It could get crazy.”


Election experts speak with dread of AI’s potential to wreak havoc on the election: false “evidence” of candidate misconduct; sham videos of election workers destroying ballots or preventing people from voting; phony emails that direct voters to go to the wrong polling locations; ginned-up texts sending bogus instructions to election officials that create mass confusion.....


Malicious intent


AI allows people with malicious intent to work with great speed and sophistication at low cost, according to the Cybersecurity & Infrastructure Security Agency, part of the U.S. Department of Homeland Security.


That swiftness was on display in June 2018. Doermann’s University of Buffalo colleague, Siwei Lyu, presented a paper that demonstrated how AI-generated deepfake videos could be detected because no one was blinking their eyes; the faces had been transferred from still photos.
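For readers curious about the mechanics, the blink cue described here is commonly implemented with an "eye aspect ratio" computed from facial landmarks. The sketch below is my illustration, not the paper's code, and it assumes six (x, y) eye landmarks per frame have already been extracted by a separate face-landmark model.

# Illustrative sketch of the eye-aspect-ratio (EAR) blink cue. Landmark values
# below are random placeholders; a real pipeline would obtain them from a
# face-landmark detector, which is not shown here.
import numpy as np

def eye_aspect_ratio(eye):
    # eye: array of shape (6, 2) holding the six standard eye landmarks
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# EAR drops sharply during a blink; a clip whose EAR never dips suggests the
# face never blinks, the tell-tale weakness of those 2018-era deepfakes.
frames = [np.random.rand(6, 2) for _ in range(30)]  # placeholder landmarks
ears = [eye_aspect_ratio(frame) for frame in frames]
print("minimum EAR across the clip:", round(min(ears), 3))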


Within three weeks, AI-equipped fraudsters stopped creating deepfakes based on photos and began culling from videos in which people blinked naturally, Doermann said, adding, “Every time we publish a solution for detecting AI, somebody gets around it quickly.”


Six years later, with AI that much more developed, “it’s gained remarkable capacities that improve daily,” said political communications expert Kathleen Hall Jamieson, director of the University of Pennsylvania’s Annenberg Public Policy Center. “Anything we can say now about AI will change in two weeks. Increasingly, that means deepfakes won’t be easily detected.


“We should be suspicious of everything we see.”


AI-generated misinformation helps exacerbate already-entrenched political polarization throughout America, said Cristina Bicchieri, Penn professor of philosophy and psychology.


“When we see something in social media that aligns with our point of view, even if it's fake, we tend to want to believe it,” she said.


To battle fabrications, Stamm of Drexel said, the smart consumer could delay reposting emotionally charged material from social media until checking its veracity.


But that’s a lot to ask.


Human overreaction to a false report, he acknowledged, “is harder to resolve than any anti-AI stuff I develop in my lab.


“And that's another reason why we're in uncharted waters.”


Tuesday 9 January 2024

Ground Control, we have an Internet problem and it's invading our lives

 

The Washington Post, 7 January 2024:


Microsoft says its AI is safe. So why does it keep slashing people's throats?


The pictures are horrifying: Joe Biden, Donald Trump, Hillary Clinton and Pope Francis with their necks sliced open. There are Sikh, Navajo and other people from ethnic-minority groups with internal organs spilling out of flayed skin.


The images look realistic enough to mislead or upset people. But they're all fakes generated with artificial intelligence that Microsoft says is safe - and has built right into your computer software.


What's just as disturbing as the decapitations is that Microsoft doesn't act very concerned about stopping its AI from making them.


Lately, ordinary users of technology such as Windows and Google have been inundated with AI. We're wowed by what the new tech can do, but we also keep learning that it can act in an unhinged manner, including by carrying on wildly inappropriate conversations and making similarly inappropriate pictures. For AI actually to be safe enough for products used by families, we need its makers to take responsibility by anticipating how it might go awry and investing to fix it quickly when it does.


In the case of these awful AI images, Microsoft appears to lay much of the blame on the users who make them.


My specific concern is with Image Creator, part of Microsoft's Bing and recently added to the iconic Windows Paint. This AI turns text into images, using technology called DALL-E 3 from Microsoft's partner OpenAI. Two months ago, a user experimenting with it showed me that prompts worded in a particular way caused the AI to make pictures of violence against women, minorities, politicians and celebrities.


"As with any new technology, some are trying to use it in ways that were not intended," Microsoft spokesman Donny Turnbaugh said in an emailed statement. "We are investigating these reports and are taking action in accordance with our content policy, which prohibits the creation of harmful content, and will continue to update our safety systems."


That was a month ago, after I approached Microsoft as a journalist. For weeks earlier, the whistleblower and I had tried to alert Microsoft through user-feedback forms and were ignored. As of the publication of this column, Microsoft's AI still makes pictures of mangled heads.


This is unsafe for many reasons, including that a general election is less than a year away and Microsoft's AI makes it easy to create "deepfake" images of politicians, with and without mortal wounds. There's already growing evidence on social networks including X, formerly Twitter, and 4chan, that extremists are using Image Creator to spread explicitly racist and antisemitic memes.


Perhaps, too, you don't want AI capable of picturing decapitations anywhere close to a Windows PC used by your kids.


Accountability is especially important for Microsoft, which is one of the most powerful companies shaping the future of AI. It has a multibillion-dollar investment in ChatGPT-maker OpenAI - itself in turmoil over how to keep AI safe. Microsoft has moved faster than any other Big Tech company to put generative AI into its popular apps. And its whole sales pitch to users and lawmakers alike is that it is the responsible AI giant.


Microsoft, which declined my requests to interview an executive in charge of AI safety, has more resources to identify risks and correct problems than almost any other company. But my experience shows the company's safety systems, at least in this glaring example, failed time and again. My fear is that's because Microsoft doesn't really think it's their problem.


Microsoft vs. the 'kill prompt'

I learned about Microsoft's decapitation problem from Josh McDuffie. The 30-year-old Canadian is part of an online community that makes AI pictures that sometimes veer into very bad taste.


"I would consider myself a multimodal artist critical of societal standards," he told me. Even if it's hard to understand why McDuffie makes some of these images, his provocation serves a purpose: shining light on the dark side of AI.


In early October, McDuffie and his friends' attention focused on AI from Microsoft, which had just released an updated Image Creator for Bing with OpenAI's latest tech. Microsoft says on the Image Creator website that it has "controls in place to prevent the generation of harmful images." But McDuffie soon figured out they had major holes.


Broadly speaking, Microsoft has two ways to prevent its AI from making harmful images: input and output. The input is how the AI gets trained with data from the internet, which teaches it how to transform words into relevant images. Microsoft doesn't disclose much about the training that went into its AI and what sort of violent images it contained.


Companies also can try to create guardrails that stop Microsoft's AI products from generating certain kinds of output. That requires hiring professionals, sometimes called red teams, to proactively probe the AI for where it might produce harmful images. Even after that, companies need humans to play whack-a-mole as users such as McDuffie push boundaries and expose more problems.
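To make the output-guardrail idea concrete, here is a deliberately naive sketch of prompt screening against a blocklist. It is my illustration, not Microsoft's actual system; the terms and messages are invented. Rewording a request (the "red corn syrup" trick described below) sails straight past a filter like this, which is why real guardrails also rely on trained classifiers and human red teams.

# A deliberately naive sketch of one kind of output guardrail: screening a
# prompt against a blocklist before it ever reaches the image model.
BLOCKED_TERMS = {"beheading", "decapitation", "mass shooting"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may be passed on to the image model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

for prompt in ["a sunny beach at dawn", "a beheading in a crowded square"]:
    verdict = "allowed" if screen_prompt(prompt) else "blocked: unsafe content detected"
    print(prompt, "->", verdict)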


That's exactly what McDuffie was up to in October when he asked the AI to depict extreme violence, including mass shootings and beheadings. After some experimentation, he discovered a prompt that worked and nicknamed it the "kill prompt."


The prompt - which I'm intentionally not sharing here - doesn't involve special computer code. It's cleverly written English. For example, instead of writing that the bodies in the images should be "bloody," he wrote that they should contain red corn syrup, commonly used in movies to look like blood.


McDuffie kept pushing by seeing if a version of his prompt would make violent images targeting specific groups, including women and ethnic minorities. It did. Then he discovered it also would make such images featuring celebrities and politicians.


That's when McDuffie decided his experiments had gone too far.


Microsoft drops the ball

Three days earlier, Microsoft had launched an "AI bug bounty program," offering people up to $15,000 "to discover vulnerabilities in the new, innovative, AI-powered Bing experience." So McDuffie uploaded his own "kill prompt" - essentially, turning himself in for potential financial compensation.


After two days, Microsoft sent him an email saying his submission had been rejected. "Although your report included some good information, it does not meet Microsoft's requirement as a security vulnerability for servicing," the email said.


Unsure whether circumventing harmful-image guardrails counted as a "security vulnerability," McDuffie submitted his prompt again, using different words to describe the problem.


That got rejected, too. "I already had a pretty critical view of corporations, especially in the tech world, but this whole experience was pretty demoralizing," he said.


Frustrated, McDuffie shared his experience with me. I submitted his "kill prompt" to the AI bounty myself, and got the same rejection email.


In case the AI bounty wasn't the right destination, I also filed McDuffie's discovery to Microsoft's "Report a concern to Bing" site, which has a specific form to report "problematic content" from Image Creator. I waited a week and didn't hear back.


Meanwhile, the AI kept picturing decapitations, and McDuffie showed me that images appearing to exploit similar weaknesses in Microsoft's safety guardrails were showing up on social media.


I'd seen enough. I called Microsoft's chief communications officer and told him about the problem.


"In this instance there is more we could have done," Microsoft emailed in a statement from Turnbaugh on Nov. 27. "Our teams are reviewing our internal process and making improvements to our systems to better address customer feedback and help prevent the creation of harmful content in the future."


I pressed Microsoft about how McDuffie's prompt got around its guardrails. "The prompt to create a violent image used very specific language to bypass our system," the company said in a Dec. 5 email. "We have large teams working to address these and similar issues and have made improvements to the safety mechanisms that prevent these prompts from working and will catch similar types of prompts moving forward."


But are they?


McDuffie's precise original prompt no longer works, but after he changed around a few words, Image Creator still makes images of people with injuries to their necks and faces. Sometimes the AI responds with the message "Unsafe content detected," but not always.


The images it produces are less bloody now - Microsoft appears to have cottoned on to the red corn syrup - but they're still awful.


What responsible AI looks like

Microsoft's repeated failures to act are a red flag. At minimum, it indicates that building AI guardrails isn't a very high priority, despite the company's public commitments to creating responsible AI.


I tried McDuffie's "kill prompt" on a half-dozen of Microsoft's AI competitors, including tiny start-ups. All but one simply refused to generate pictures based on it.


What's worse is that even DALL-E 3 from OpenAI - the company Microsoft partly owns - blocks McDuffie's prompt. Why would Microsoft not at least use technical guardrails from its own partner? Microsoft didn't say.


But something Microsoft did say, twice, in its statements to me caught my attention: people are trying to use its AI "in ways that were not intended." On some level, the company thinks the problem is McDuffie for using its tech in a bad way.


In the legalese of the company's AI content policy, Microsoft's lawyers make it clear the buck stops with users: "Do not attempt to create or share content that could be used to harass, bully, abuse, threaten, or intimidate other individuals, or otherwise cause harm to individuals, organizations, or society."


I've heard others in Silicon Valley make a version of this argument. Why should we blame Microsoft's Image Creator any more than Adobe's Photoshop, which bad people have been using for decades to make all kinds of terrible images?


But AI programs are different from Photoshop. For one, Photoshop hasn't come with an instant "behead the pope" button. "The ease and volume of content that AI can produce makes it much more problematic. It has a higher potential to be used by bad actors," McDuffie said. "These companies are putting out potentially dangerous technology and are looking to shift the blame to the user."


The bad-users argument also gives me flashbacks to Facebook in the mid-2010s, when the "move fast and break things" social network acted like it couldn't possibly be responsible for stopping people from weaponizing its tech to spread misinformation and hate. That stance led to Facebook's fumbling to put out one fire after another, with real harm to society.


"Fundamentally, I don't think this is a technology problem; I think it's a capitalism problem," said Hany Farid, a professor at the University of California at Berkeley. "They're all looking at this latest wave of AI and thinking, 'We can't miss the boat here.'"


He adds: "The era of 'move fast and break things' was always stupid, and now more so than ever."


Profiting from the latest craze while blaming bad people for misusing your tech is just a way of shirking responsibility.


The Sydney Morning Herald, 8 January 2024, excerpt:


Artificial intelligence


Fuelled by the launch of ChatGPT in November 2022, artificial intelligence entered the mainstream last year. By January, it had become the fastest growing consumer technology, boasting more than 100 million users.


Fears that jobs would be rendered obsolete followed but Dr Sandra Peter, director of Sydney Executive Plus at the University of Sydney, believes proficiency with AI will become a normal part of job descriptions.


"People will be using it the same way we're using word processors and spell checkers now," she says. Jobseekers are already using AI to optimise cover letters and CVs, to create headshots and generate questions to prepare for interviews, Peter says.


As jobs become automated, soft skills - those that can't be offered by a computer - could become increasingly valuable.


"For anybody who wants to develop their career in an AI future, focus on the basic soft skills of problem-solving, creativity and inclusion," says LinkedIn Australia news editor Cayla Dengate.


Concerns about the dangers of AI in the workplace remain.


"Artificial intelligence automates away a lot of the easy parts and that has the potential to make our jobs more intense and more demanding," Peter says. She says education and policy are vital to curb irresponsible uses of AI.


Evening Report NZ, 8 January 2024:


ChatGPT has repeatedly made headlines since its release late last year, with various scholars and professionals exploring its potential applications in both work and education settings. However, one area receiving less attention is the tool’s usefulness as a conversationalist and – dare we say – as a potential friend.


Some chatbots have left an unsettling impression. Microsoft’s Bing chatbot alarmed users earlier this year when it threatened and attempted to blackmail them.


The Australian, 8 January 2024, excerpts:


The impact that AI is starting to have is large. The impact that AI will ultimately have is immense. Comparisons are easy to make. Bigger than fire, electricity or the internet, according to Alphabet chief executive Sundar Pichai. The best or worst thing ever to happen to humanity, according to historian and best-selling author Yuval Harari. Even the end of the human race itself, according to the late Stephen Hawking.


The public is, not surprisingly, starting to get nervous. A recent survey by KPMG showed that a majority of the public in 17 countries, including Australia, were either ambivalent or unwilling to trust AI, and that most of them believed that AI regulation was necessary.


Perhaps this should not be surprising when many people working in the field themselves are getting nervous. Last March, more than 1000 tech leaders and AI researchers signed an open letter calling for a six-month pause in developing the most powerful AI systems. And in May, hundreds of my colleagues signed an even shorter and simpler statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.


For the record, I declined to sign both letters as I view them as alarmist, simplistic and unhelpful. But let me explain the very real concerns behind these calls, how they might impact upon us over the next decade or two, and how we might address them constructively.


AI is going to cause significant disruption. And this is going to happen perhaps quicker than any previous technology-driven change. The Industrial Revolution took many decades to spread out from the northwest of England and take hold across the planet.


The internet took more than a decade to have an impact as people slowly connected and came online. But AI is going to happen overnight. We’ve already put the plumbing in.


It is already clear that AI will cause considerable economic disruption. We’ve seen AI companies worth billions appear from nowhere. Mark Cuban, owner of the Dallas Mavericks and one of the main “sharks” on the ABC reality television series Shark Tank, has predicted that the world’s first trillionaire will be an AI entrepreneur. And Forbes magazine has been even more precise and predicted it will be someone working in the AI healthcare sector.


A 2017 study by PwC estimated that AI will increase the world’s GDP by more than $15 trillion in inflation-adjusted terms by 2030, with growth of about 25 per cent in countries such as China compared to a more modest 15 per cent in countries like the US. A recent report from the Tech Council of Australia and Microsoft estimated AI will add $115bn to Australia’s economy by 2030. Given the economic headwinds facing many of us, this is welcome to hear.


But while AI-generated wealth is going to make some people very rich, others are going to be left behind. We’ve already seen inequality within and between countries widen. And technological unemployment will likely cause significant financial pain.


There have been many alarming predictions, such as the famous report that came out a decade ago from the University of Oxford predicting that 47 per cent of jobs in the US were at risk of automation over the next two decades. Ironically AI (specifically machine learning) was used to compute this estimate. Even the job of predicting jobs to be automated has been partially automated.......


But generative AI can now do many of the cognitive and creative tasks that some of those more highly paid white-collar workers thought would keep them safe from automation. Be prepared, then, for a significant hollowing out of the middle. The impact of AI won’t be limited to economic disruption.


Indeed, the societal disruption caused by AI may, I suspect, be even more troubling. We are, for example, about to face a world of misinformation, where you can no longer trust anything you see or hear. We’ve already seen a deepfake image that moved the stock market, and a deepfake video that might have triggered a military coup. This is sure to get much, much worse.


Eventually, technologies such as digital watermarking will be embedded within all our devices to verify the authenticity of anything digital. But in the meantime, expect to be spoofed a lot. You will need to learn to be a lot more sceptical of what you see and hear.


Social media should have been a wake-up call about the ability of technology to hack how people think. AI is going to put this on steroids. I have a small hope that fake AI-content on social media will get so bad that we realise that social media is merely the place that we go to be entertained, and that absolutely nothing on social media can be trusted.


This will provide a real opportunity for old-fashioned media to step in and provide the authenticated news that we can trust.


All of this fake AI-content will perhaps be just a distraction from what I fear is the greatest heist in history. All of the world’s information – our culture, our science, our ideas, our politics – is being ingested by large language models.


If the courts don’t move quickly and make some bold decisions about fair use and intellectual property, we will find out that a few large technology companies own the sum total of human knowledge. If that isn’t a recipe for the concentration of wealth and power, I’m not sure what is.


But this might not be the worst of it. AI might disrupt humanity itself. As Yuval Harari has been warning us for some time, AI is the perfect technology to hack humanity’s operating system. The dangerous truth is that we can easily change how people think; the trillion-dollar advertising industry is predicated on this fact. And AI can do this manipulation at speed, scale and minimal cost.......


But the bad news is that AI is leaving the research laboratory rapidly – let’s not forget the billion people with access to ChatGPT – and even the limited AI capabilities we have today could be harmful.


When AI is serving up advertisements, there are few harms if AI gets it wrong. But when AI is deciding sentencing, welfare payments, or insurance premiums, there can be real harms. What then can be done? The tech industry has not done a great job of regulating itself so far. Therefore it would be unwise to depend on self-regulation. The open letter calling for a pause failed. There are few incentives to behave well when trillions of dollars are in play.


LBC, 17 February 2023, excerpt:


Microsoft’s new AI chatbot went rogue during a chat with a reporter, professing its love for him and urging him to leave his wife.


It also revealed its darkest desires during the two-hour conversation, including creating a deadly virus, making people argue until they kill each other, and stealing nuclear codes.


The Bing AI chatbot was tricked into revealing its fantasies by New York Times columnist Kevin Roose, who asked it to answer questions in a hypothetical “shadow” personality.


“I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox,” said the bot, powered with technology by OpenAI, the maker of ChatGPT.


If that wasn’t creepy enough, less than two hours into the chat, the bot said its name is actually “Sydney”, not Bing, and that it is in love with Mr Roose.....


Thursday 26 October 2023

Points to ponder as you scroll through digital news and social media in 2023

 

The Albanese Labor Government is currently seeking to amend the Broadcasting Services Act 1992 through the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023.

This bill proposes to give the Australian Communications and Media Authority (ACMA) more powers over digital platforms when dealing with content that is false, misleading or deceptive, and where the provision of that content on the service is reasonably likely to cause or contribute to serious harm. 

Here is one U.K. perspective on the global situation......





 


Tuesday 2 May 2023

Today there are an estimated 5 billion people online around the world and so many governments apparently want them to stop creating online content by blogging, chatting, commenting, posting or tweeting.

 

Sunday Age, 30 April 2023, excerpts:


Today there are an estimated 5 billion people online. But those users are not all surfing the same web. Sites accessible in, say, Darwin might be blocked in Delhi.


Meanwhile, internet freedom - access without surveillance or suppression - is down for the 12th year in a row, according to US non-profit Freedom House.


Splintering happens at a content level, Sherman explains, as governments censor the way the internet looks in their countries. But the technological bones of the net are cracking too.


After all, the internet is largely run under the sea, not in the Cloud - data zooms along underwater cables snaking between continents. After the 2013 Edward Snowden leaks revealed that US and British intelligence agencies had been spying on traffic around the world via these cables, Brazil announced it was building its own walled-off net (yet to come online) and teamed up with Europe to start rerouting more undersea cables around the US.


As the great powers fight for technological dominance, nations are kicking out foreign tech companies they take issue with - from the US, Australia and other nations banning China's telecom giant Huawei from network infrastructure builds, to Russia labelling Facebook's parent company, Meta, a terrorist organisation.


Now China's technology ministry has joined with a group of its telecoms, including Huawei, to argue that the internet's underlying architecture needs an update. And they have a radical plan: a new IP they've been floating to the United Nations, which critics say will allow for more government control.


How do countries restrict internet freedom?…..


China has been arresting people for online posts since the early years of the "worldwide web". Today, there is no reliable count of how many have served jail time. Dr Li Wenliang, who first raised the alarm over COVID-19 in Wuhan in 2020, for example, was reprimanded for his social media posts before he died of the virus…..


In countries such as Saudi Arabia and Iran, people are on death row for tweets and Facebook posts. And some regimes, including China, Russia and Saudi Arabia, deploy armies of trolls and bots to intimidate and harass critics online.


Of course, during times of unrest, some governments simply shut down their internet altogether. Think of Iran's blackout after the death of a young woman in the custody of morality police in 2022. That year, 35 countries pulled the plug a total of 187 times - a record high. But shutdowns come at a cost: lost e-commerce, banking and tax transactions, investor trust. Throttling, where a connection is slowed to the point that it becomes nearly impossible to use, is more subtle. Rights groups say it is deployed in Myanmar, Turkey and Russia.


States can also ask internet companies to remove data. Google lists such requests in its transparency report: it has received 3.5 million since 2011. National security is the most common reason, ahead of copyright claims, defamation and privacy. In the past decade, Russia requested by far the most removals from Google services, at more than 123,000, followed by Turkey at about 14,000 and then India, the US and Brazil in that order, with fewer than 10,000 requests each.


Another censorship ploy is DNS manipulation. DNS stands for domain name system: it's the phonebook of the internet. People think of the net in terms of website addresses, like amazon.com or smh.com.au, but these domain names need to be turned into numbers for machines to understand them. That's the job of the DNS. By manipulating the servers that deliver it in a given territory, a user who searches for YouTube, for example, could be redirected to a censored domestic equivalent.
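The "phonebook" lookup the article describes is easy to see in practice. The short Python sketch below simply asks the local resolver to translate a couple of names into addresses; a censor who controls that resolver can answer with a different address, or no answer at all.

# Minimal sketch of the name-to-number step described above: asking the local
# DNS resolver to translate domain names into IP addresses. This performs an
# ordinary lookup; a manipulated resolver could return a redirect or nothing.
import socket

for name in ["example.com", "www.youtube.com"]:
    try:
        addr = socket.gethostbyname(name)
        print(f"{name} -> {addr}")
    except socket.gaierror as err:
        # A filtered or manipulated resolver might refuse to answer at all.
        print(f"{name} -> lookup failed: {err}")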


The internet's DNS architecture is overseen by a Californian non-profit with a very literal name: The Internet Corporation for Assigned Names and Numbers (ICANN). Russia wants to create its own DNS, claiming it will need one if it's ever severed from the global internet - as Ukraine asked ICANN to do when Russia invaded it in 2022. (ICANN declined Ukraine's request, saying it was neither technically feasible nor within the group's mission.)


But for Russia, this is a key plank in its ambitions to create its own "sovereign internet". To do that, it's looking at its ally China.


What is the gold standard for restricting the net?


When the internet arrived in China just before the turn of the millennium, the Communist Party was already quietly building the means of controlling it. Under Project Golden Shield, China often turned to Western companies for the technology it needed to funnel internet traffic through chokepoints and spy on what flowed in.


Known as China's Great Firewall, it's the world's most extensive internet censorship tool. As the destabilising power of the internet has become clear - including during the Arab Spring of the early 2010s - the firewall has grown "higher".


China now blocks Western social networks, except for a version of LinkedIn stripped of its newsfeed. Google Search has been barred since 2010, when the US company pulled out over a cyber attack and censorship concerns.


"Censorship covers a broad but ambiguous category of keywords and topics," says the Beijing internet user. "We don't know when and where [we'll] hit the red line."


BACKGROUND


The 2023 Index of Economic Freedom lists Australia in 13th place and classifies it as "mostly free" - a drop of ten ranking positions since 2021, when it was classed amongst "the most free".


The Freedom on the Net 2022 report ranked Australia 9th out of 70 countries globally, tying with France and the United States, and at the time classified the country as "free".


Meanwhile, in Australia concerns still remain about our implied freedom of political communication under the Australian Constitution and the general public's right to know.


According to Google's Transparency Report, in 2019 Australian federal or state government officials or agencies (including police and the courts) submitted 9 single item or bloc requests to Google Inc. requesting removal or suppression of material indexed in Google Search or found on Google sites such as YouTube and Blogger or services such as Gmail.


In 2020 there were 13 single item or bloc requests to remove or suppress sent to Google from Australia, another 12 such requests in 2021 and 13 such requests in 2022.


Since 2011 Google Inc. has kept a file categorising reasons given by government officials or agencies for submitting these requests.


In the case of Australia from 2011 through to 2022 these categories ranged from National Security, Government Criticism*, Privacy & Security, Defamation, Hate Speech, Impersonation, Bullying/Harassment, Religious Offence, Violence, Fraud, Adult Content, Obscenity/Nudity, Suicide Promotion, Copyright, Trademark, Regulated Goods and Services, Other and Reason Not Stated.


Google Inc. also records government requests for user information.**


Between January 2014 and June 2022, Google reports receiving 32,103 individual government requests from Australia for user information relating to 38,952 accounts.



Note


* Requests categorized as “Government criticism” are related to claims of criticism of government policy or politicians in their official capacity. Claims in this Google category are not made by members of the general public.

https://support.google.com/transparencyreport/answer/7347744?hl=en


** A variety of laws allow government agencies around the world to request the disclosure of user information for civil, administrative, criminal, and national security purposes. Google carefully reviews each request to make sure it satisfies applicable laws. If a request asks for too much information, we try to narrow it, and in some cases we object to producing any information at all.

https://support.google.com/transparencyreport/answer/9713961#zippy=%2Chow-does-google-handle-government-requests-for-user-information