Toggle light / dark theme

Codegen raises new cash to automate software engineering tasks

face_with_colon_three Basically although some or all coding jobs could be absorbed I remain positive because now everyone be a god now when infinite computation comes out and also infinite agi.


Jay Hack, an AI researcher with a background in natural language processing and computer vision, came to the realization several years ago that large language models (LLMs) — think OpenAI’s GPT-4 or ChatGPT — have the potential to make developers more productive by translating natural language requests into code.

After working at Palantir as a machine learning engineer and building and selling Mira, an AI-powered shopping startup for cosmetics, Hack began experimenting with LLMs to execute pull requests — the process of merging new code changes with main project repositories. With the help of a small team, Hack slowly expanded these experiments into a platform, Codegen, that attempts to automate as many mundane, repetitive software engineering tasks as possible leveraging LLMs.

“Codegen automates the menial labor out of software engineering by empowering AI agents to ship code,” Hack told TechCrunch in an email interview. “The platform enables companies to move significantly quicker and eliminates costs from tech debt and maintenance, allowing companies to focus on product innovation.”

Hackers Could Exploit Google Workspace and Cloud Platform for Ransomware Attacks

A set of novel attack methods has been demonstrated against Google Workspace and the Google Cloud Platform that could be potentially leveraged by threat actors to conduct ransomware, data exfiltration, and password recovery attacks.

“Starting from a single compromised machine, threat actors could progress in several ways: they could move to other cloned machines with GCPW installed, gain access to the cloud platform with custom permissions, or decrypt locally stored passwords to continue their attack beyond the Google ecosystem,” Martin Zugec, technical solutions director at Bitdefender, said in a new report.

A prerequisite for these attacks is that the bad actor has already gained access to a local machine through other means, prompting Google to mark the bug as not eligible for fixing “since it’s outside of our threat model and the behavior is in line with Chrome’s practices of storing local data.”

Google’s New Titan Security Key Adds Another Piece to the Password-Killing Puzzle

As part of its announcement at the Aspen Cyber Summit in New York City today, Google also said that in 2024 it will give 100,000 of the new Titan keys to high-risk individuals around the world. The effort is part of Google’s Advanced Protection Program, which offers vulnerable users expanded account monitoring and threat protection. The company has given away Titan keys through the program in the past, and today it cited the rise of phishing attacks and upcoming global elections as two examples of the need to continue expanding the use of secure authentication methods like passkeys.

Hardware authentication tokens have unique protective benefits because they are siloed, stand-alone devices. But they still need to be rigorously secured to ensure they don’t introduce a different point of weakness. And as with any product, they can have vulnerabilities. In 2019, for example, Google recalled and replaced its Titan BLE-branded security key because of a flaw in its Bluetooth implementation.

When it comes to the new Titan generation, Google tells WIRED that, as with all of its products, it conducted an extensive internal security review on the devices and it also contracted with two external auditors, NCC Group and Ninja Labs, to conduct independent assessments of the new key.

Chinese company uses quantum numbers to minimize cybersecurity threats

The addition of an additional step in a long-established workflow can help reduce substantial costs show cybersecurity researchers.


Sakkmesterke/iStock.

The increasing use of cloud storage has increased the risks to data security, and cybersecurity researchers have been looking at distributed cloud storage as a plausible solution to this problem.

5 ways to build an Alzheimer’s-resistant brain | Lisa Genova

Only 2% of Alzheimer’s is 100% genetic. The rest is up to your daily habits.

Up Next ► 4 ways to hack your memory https://youtu.be/SCsztDMGP7o.

People want a perfect memory. They wish that they can remember everything that they want to remember. But it doesn’t work like that.

Most people over the age of 50 think that forgetting someone’s name or forgetting why they went into the kitchen is a sign of Alzheimer’s. It isn’t. Most of our forgetfulness is perfectly normal.

If you are worried about developing Alzheimer’s or another form of dementia, some simple lifestyle modifications can help prevent it: getting enough sleep, exercising, eating a balanced diet, and managing stress.

Read the video transcript ► https://bigthink.com/videos/cognitive-decline/

BlueNoroff hackers backdoor Macs with new ObjCShellz malware

The North Korean-backed BlueNorOff threat group targets Apple customers with new macOS malware tracked as ObjCShellz that can open remote shells on compromised devices.

BlueNorOff is a financially motivated hacking group known for attacking cryptocurrency exchanges and financial organizations such as venture capital firms and banks worldwide.

The malicious payload observed by Jamf malware analysts (labeled ProcessRequest) communicates with the swissborg[.]blog, an attacker-controlled domain registered on May 31 and hosted at 104.168.214[.]151 (an IP address part of BlueNorOff infrastructure).

OpenAI blames DDoS attack for ongoing ChatGPT outage

OpenAI has confirmed that a distributed denial-of-service (DDoS) attack is behind “periodic outages” affecting ChatGPT and its developer tools.

ChatGPT, OpenAI’s AI-powered chatbot, has been experiencing sporadic outages for the past 24 hours. Users who attempted to access the service have been greeted with a message stating that “ChatGPT is at capacity right now,” and others, including TechCrunch, have been unable to log into the service.

OpenAI CEO Sam Altman initially blamed the issue on interest in the platform’s new features, unveiled at the company’s first developer conference on Monday, “far outpacing our expectations.” OpenAI said the issue was fixed at approximately 1 p.m. PST on November 8.

N. Korea’s BlueNoroff Blamed for Hacking macOS Machines with ObjCShellz Malware

The development arrives days after Elastic Security Labs disclosed the Lazarus Group’s use of a new macOS malware called KANDYKORN to target blockchain engineers.

Also linked to the threat actor is a macOS malware referred to as RustBucket, an AppleScript-based backdoor that’s designed to retrieve a second-stage payload from an attacker-controlled server.

In these attacks, prospective targets are lured under the pretext of offering them investment advice or a job, only to kick-start the infection chain by means of a decoy document.

Fake everything: how machine learning is being used to fight back against disinformation campaigns

Another good use for AI. Fighting disinformation.


About 60% of adults in the US who get their news through social media have, largely unknowingly, shared false information, according to a poll by the Pew Research Center. The ease at which disinformation is spread and the severity of consequences it brings — from election hacking to character assassination — make it an issue of grave concern for us all.

One of the best ways to combat the spread of fake news on the internet is to understand where the false information was started and how it was disseminated. And that’s exactly what Camille Francois, the chief innovation officer at Graphika, is doing. She’s dedicated to bringing to light disinformation campaigns before they take hold.

Francois and her team are employing machine learning to map out online communities and better understand how information flows through networks. It’s a bold and necessary crusade as troll farms, deep fakes, and false information bombard the typical internet user every single day.

Francois says, this work is two parts technology, one part sociology. The techniques are always evolving, and we have to stay one step ahead.” We sit down with Francois for an in-depth discussion on how the tech works and what it means for the dissemination of information across the internet.

Chatbots are so gullible, they’ll take directions from hackers

‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.

Imagine a chatbot is applying for a job as your personal assistant. The pros: This chatbot is powered by a cutting-edge large language model. It can write your emails, search your files, summarize websites and converse with you.

The con: It will take orders from absolutely anyone.

AI chatbots are good at many things, but they struggle to tell the difference between legitimate commands from their users and manipulative commands from outsiders. It’s an AI Achilles’ heel, cybersecurity researchers say, and it’s a matter of time before attackers take advantage of it.


“Prompt injection” is a major risk to large language models and the chatbots they power. Here’s how the attack works, examples and potential fallout.

/* */