Originally posted on inputmag.
Anyone who’s ever done any substantial coding knows just how painstaking and time-consuming a project can be, thanks to thousands of lines of code and hours of tedious trial-and-error. A new program is aiming to ease that pain.
Created by GitHub in collaboration with Microsoft and OpenAI, Copilot is a new, collaborative artificial intelligence tool able to make predictive suggestions and edits for programmers, trained on billions of lines of publicly available code. “GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions,” GitHub CEO Nat Friedman explained in a blog post yesterday. “It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet.”
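To picture what “suggesting whole lines or entire functions” looks like in practice, here’s a minimal, hypothetical sketch (not taken from GitHub’s announcement, and the function name is invented): the developer writes only a signature and a docstring, and the tool proposes the rest.

```python
from datetime import date

# Hypothetical illustration, not an official GitHub example: the developer types
# only the signature and docstring below; the body is the kind of whole-function
# completion Copilot is described as suggesting from that context.
def parse_iso_date(value: str) -> date:
    """Parse an ISO-8601 date string like '2021-06-29' into a date object."""
    year, month, day = (int(part) for part in value.split("-"))
    return date(year, month, day)
```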
Setting aside the obvious jokes (for now) about computers covertly directing our coding to upgrade themselves into the dominant species, GitHub Copilot sounds like a pretty promising tool for programmers and software designers looking to streamline their projects, as well as gain a useful second set of eyes for their work. Copilot’s current “technical preview,” available to a limited number of testers via a Visual Studio Code extension, apparently works best with Python, JavaScript, TypeScript, Ruby, and Go, but GitHub promises that “it understands dozens of languages and can help you find your way around almost anything.”
Given that Copilot is constantly learning from users’ inputs, it’s not much of a stretch to imagine AI like this becoming increasingly helpful on even the most complex coding projects.
FAR FROM PERFECT — Coders don’t need to worry about programs like GitHub’s Copilot coming for their jobs anytime soon, though. According to Copilot’s FAQ, a recent benchmark in which the model was asked to fill in the blanked-out bodies of “Python functions that have good test coverage in open source repos” saw it getting 43 percent of the answers right on the first attempt, rising to 57 percent when allowed 10 attempts. “It’s getting smarter all the time,” promises GitHub, but it sounds like there’s still a long way to go before we can hand the reins entirely over to Copilot.
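For a rough sense of what that benchmark setup involves, here’s a minimal, hypothetical sketch (the function and tests are invented for illustration, not taken from GitHub’s FAQ): a well-tested function has its body blanked out, the model proposes a replacement body, and the suggestion counts as correct only if the existing tests still pass.

```python
# Hypothetical sketch of the benchmark idea; names are invented for illustration.

def median(values):
    """Return the median of a non-empty list of numbers."""
    ...  # body blanked out; this is what the model is asked to fill in

def candidate_median(values):
    """One model-suggested body for the blanked-out function above."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def passes_tests(fn):
    """Run the repo's existing tests against a candidate implementation."""
    try:
        assert fn([3, 1, 2]) == 2
        assert fn([1, 2, 3, 4]) == 2.5
        return True
    except AssertionError:
        return False

print(passes_tests(candidate_median))  # True, so it counts toward the pass rate
```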
DECODING THE BIASES — As it stands, an AI is only as good as its designers, and designers are human, after all (for now — see bad robot overlord jokes above for reference). Machine learning and artificial intelligence programs like facial recognition still pose a whole host of ethical and privacy concerns. An increasing number of people are working to address these issues (GitHub included), but projects like Copilot will still need to be closely monitored and critiqued to ensure the most equitable and helpful assistance possible.
Big tech companies still love to tout artificial intelligence systems as an innovative solution to endemic human bias and racism… all despite numerous reports and extensive analyses repeatedly refuting that claim. AI-driven insurance claims service Lemonade, however, has apparently not seen any of this evidence to the contrary.
In fact, the company has managed to make a recent social media snafu even worse by trying to walk back boasts of its AI’s supposed ability to detect incriminating “non-verbal cues” and other possible indicators of fraud. In doing so, the insurer hasn’t just enraged its users; it’s directly contradicted its own SEC filings. And it’s made it sound a lot like it’s using AI for the insurance equivalent of phrenology.
IT STARTED OUT WITH A TWEET — As Vice reports, the little whoopsie-daisy stems from a now-deleted Twitter thread on Lemonade’s official account, in which it advertised its app’s video-claims service.
“For example, when a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can’t, since they don’t use a digital claims service,” read one of the tweets.
Experts quickly pointed out the unreliability of AI-driven physiognomic analysis, and inherent bias issues within artificial intelligence programming generally. AI has a racism problem, so perhaps it shouldn’t be used to assess insurance claims.
HOW DID IT END UP LIKE THIS? — Lemonade hastily walked back its assertions with an apologetic blog post today, leading with, “TL;DR: We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims.”
And yet…
Seriously, Lemonade. Just read Algorithms of Oppression.
“ENTIRE CLAIM THROUGH RESOLUTION” — As CNN’s Rachel Metz highlights, Lemonade’s attempt to cover its posterior is demonstrably false if we’re judging from the company’s own paperwork (which we are): “AI Jim handles the entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention (and with zero claims overhead, known as loss adjustment expense, or LAE),” reads Lemonade’s S-1 SEC filing, “AI Jim” referring to the company’s claims bot.
Just for reference, if you’re trying to market your AI insurance bot as “unbiased,” naming it “AI Jim” really doesn’t do much to further your case. Anyway, we’ll go ahead and join the chorus of people online by asking: Which is it, Lemonade? Is it true that you “have never, and will never, let AI auto-reject claims,” or does… um… AI Jim handle claims “through resolution” approximately 33% of the time? It can’t be both.
PROMISES OF CHANGE IN THE MONTHS TO COME — Lemonade assures everyone that “we have had ongoing conversations with regulators across the globe about fairness and AI” in past years, and promises “to look into the ways we use AI today and the ways in which we’ll use it going forward.” Lemonade promises “to share more about these topics in the coming months.” We look forward to hearing all about it on Twitter. And if we ever need to claim from Lemonade, we’re going to wear a big hat in our video so AI Jim can’t measure our head.