‘Alien’ Signal Decoded

An anonymous reader quotes a report from the European Space Agency: White dots arranged in five clusters against a black background. This is the simulated extraterrestrial signal transmitted from Mars and deciphered by a father and daughter on Earth after a year-long decoding effort. On June 7, 2024, media artist Daniela de Paulis received in her inbox this simple, retro-looking image depicting five amino acids. It was the solution to a cosmic puzzle beamed from ESA's ExoMars Trace Gas Orbiter (TGO) in May 2023, when the European spacecraft played alien as part of the multidisciplinary art project 'A Sign in Space.' After three radio astronomy observatories on Earth intercepted the signal, the challenge was first to extract the message from the raw radio data, and then to decode it. In just 10 days, a community of 5,000 citizen scientists gathered online and managed to extract the signal. The second task took longer and required some visionary minds. US citizens Ken and Keli Chaffin cracked the code by following their intuition and running simulations for hours and days on end. The father-and-daughter team discovered that the message contained movement, suggesting some sort of cellular formation and life forms. Amino acids and proteins are the building blocks of life. Now that the cryptic signal has been deciphered, the quest for meaning begins. The interpretation of the message, like any art piece, remains open. Daniela crafted the message with a small group of astronomers and computer scientists, with support from ESA, the SETI Institute and the Green Bank Observatory. The artist and collaborators behind the project are now taking a step back and watching how citizen scientists shape the challenge on their own. Read more of this story at Slashdot.

BBC Interviews Charley Kline and Bill Duvall, Creators of Arpanet

The BBC interviewed scientists Charley Kline and Bill Duvall 55 years after the first communications were made over a system called Arpanet, short for the Advanced Research Projects Agency Network. "Kline and Duvall were early inventors of networking, networks that would ultimately lead to what is today the internet," writes longtime Slashdot reader dbialac. "Duvall had basic ideas of what might come of the networks, but they had no idea of how much of a phenomenon it would turn into." Here's an excerpt from the interview:

BBC: What did you expect Arpanet to become?

Duvall: "I saw the work we were doing at SRI as a critical part of a larger vision, that of information workers connected to each other and sharing problems, observations, documents and solutions. What we did not see was the commercial adoption, nor did we anticipate the phenomenon of social media and the associated disinformation plague. Although, it should be noted, that in [SRI computer scientist] Douglas Engelbart's 1962 treatise describing the overall vision, he notes that the capabilities we were creating would trigger profound change in our society, and it would be necessary to simultaneously use and adapt the tools we were creating to address the problems which would arise from their use in society."

BBC: What aspects of the internet today remind you of Arpanet?

Duvall: "Referring to the larger vision which was being created in Engelbart's group (the mouse, full screen editing, links, etc.), the internet today is a logical evolution of those ideas enhanced, of course, by the contributions of many bright and innovative people and organisations."

Kline: "The ability to use resources from others. That's what we do when we use a website. We are using the facilities of the website and its programs, features, etc. And, of course, email. The Arpanet pretty much created the concept of routing and multiple paths from one site to another. That got reliability in case a communication line failed. It also allowed increases in communication speeds by using multiple paths simultaneously. Those concepts have carried over to the internet. As we developed the communications protocols for the Arpanet, we discovered problems, redesigned and improved the protocols and learned many lessons that carried over to the internet. TCP/IP [the basic standard for internet connection] was developed both to interconnect networks, in particular the Arpanet with other networks, and also to improve performance, reliability and more."

[Photo caption: Today, the site of the first internet transmission at UCLA's Boelter Hall Room 3420 functions as a monument to technology history. Credit: Courtesy of UCLA]

BBC: How do you feel about this anniversary?

Kline: "That's a mix. Personally, I feel it is important, but a little overblown. The Arpanet and what sprang from it are very important. This particular anniversary to me is just one of many events. I find somewhat more important than this particular anniversary were the decisions by Arpa to build the network and continue to support its development."

Duvall: "It's nice to remember the origin of something like the internet, but the most important thing is the enormous amount of work that has been done since that time to turn it into what is a major part of societies worldwide."

Read more of this story at Slashdot.

GitHub Copilot Moves Beyond OpenAI Models To Support Claude 3.5, Gemini

GitHub Copilot will switch from using exclusively OpenAI's GPT models to a multi-model approach, adding Anthropic's Claude 3.5 Sonnet and Google's Gemini 1.5 Pro. Ars Technica reports: First, Anthropic's Claude 3.5 Sonnet will roll out to Copilot Chat's web and VS Code interfaces over the next few weeks. Google's Gemini 1.5 Pro will come a bit later. Additionally, GitHub will soon add support for a wider range of OpenAI models, including o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to fit their needs -- and organizations will be able to choose which models will be usable by team members. The new approach makes sense for users, as certain models are better at certain languages or types of tasks. "There is no one model to rule every scenario," wrote [GitHub CEO Thomas Dohmke]. "It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice." It starts with the web-based and VS Code Copilot Chat interfaces, but it won't stop there. "From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot's surface areas and functions soon," Dohmke wrote. There are a handful of additional changes coming to GitHub Copilot, too, including extensions, the ability to manipulate multiple files at once from a chat with VS Code, and a preview of Xcode support. GitHub also introduced "Spark," a natural language-based app development tool that enables both non-coders and coders to create and refine applications using conversational prompts. It's currently in an early preview phase, with a waitlist available for those who are interested. Read more of this story at Slashdot.

More Than a Quarter of New Code At Google Is Generated By AI

Google has integrated AI deeply across its operations, with over 25% of its new code generated by AI. CEO Sundar Pichai announced the milestone during the company's third quarter 2024 earnings call. The Verge reports: AI is helping Google make money as well. Alphabet reported $88.3 billion in revenue for the quarter, with Google Services (which includes Search) revenue of $76.5 billion, up 13 percent year-over-year, and Google Cloud (which includes its AI infrastructure products for other companies) revenue of $11.4 billion, up 35 percent year-over-year. Operating incomes were also strong. Google Services hit $30.9 billion, up from $23.9 billion last year, and Google Cloud hit $1.95 billion, significantly up from last year's $270 million. "In Search, our new AI features are expanding what people can search for and how they search for it," Pichai says in a statement. "In Cloud, our AI solutions are helping drive deeper product adoption with existing customers, attract new customers and win larger deals. And YouTube's total ads and subscription revenues surpassed $50 billion over the past four quarters for the first time." Read more of this story at Slashdot.

SoftBank’s Son Says Artificial Super Intelligence To Exist By 2035

An anonymous reader quotes a report from Reuters: SoftBank CEO Masayoshi Son reiterated his belief in the coming of artificial super intelligence (ASI) on Tuesday, saying it would require hundreds of billions of dollars of investment to realize. Artificial super intelligence will be 10,000 times smarter than a human brain and will exist by 2035, Son told an audience of global business, technology and finance leaders at a conference in Riyadh, Saudi Arabia. Son said he is saving up funds "so I can make the next big move," but did not provide any details as to his investment plans. He predicted that generative AI will require $900 trillion in cumulative capital expenditure on data centers and chips in the future, adding that he thought chip maker Nvidia was undervalued on this basis. Read more of this story at Slashdot.

Local Privilege Escalation Vulnerability Affecting X.Org Server For 18 Years

Phoronix's Michael Larabel reports: CVE-2024-9632 was made public today as the latest security vulnerability affecting the X.Org Server. The issue has been present in the codebase for 18 years and can lead to local privilege escalation. Introduced in the X.Org Server 1.1.1 release back in 2006, CVE-2024-9632 affects both the X.Org Server and XWayland. By providing a modified bitmap to the X.Org Server, a heap-based buffer overflow can be triggered. The flaw is within _XkbSetCompatMap() and stems from not updating the heap size properly; it can lead to local privilege escalation if the server is run as root, or to remote code execution in scenarios such as X11 forwarding over SSH. You can read the security advisory announcement here. Read more of this story at Slashdot.
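To make the bug class concrete, here is a minimal, self-contained C sketch of the bookkeeping mistake the advisory describes. The struct and function names are hypothetical and the logic is deliberately simplified; this is not the actual X.Org code. The pattern: a resize path updates the element count but forgets the stored capacity, so a later bounds check trusts the stale, larger capacity and writes past the real heap allocation.

```c
#include <stdlib.h>

typedef struct {
    int    *items;
    size_t  num;   /* elements currently in use   */
    size_t  size;  /* elements actually allocated */
} CompatMap;       /* hypothetical stand-in, not the X.Org struct */

/* Buggy resize: reallocates the buffer but only updates 'num'.
 * After a shrink, 'size' still advertises the old, larger capacity. */
static int map_resize_buggy(CompatMap *m, size_t n)
{
    int *p = realloc(m->items, n * sizeof *p);
    if (!p && n > 0)
        return -1;
    m->items = p;
    m->num   = n;   /* BUG: m->size is left stale.                  */
    /* Fix for this bug class: also do 'm->size = n;' here, so the
     * recorded capacity always matches the real allocation.         */
    return 0;
}

static int map_append(CompatMap *m, int v)
{
    if (m->num >= m->size)      /* passes against the stale capacity */
        return -1;
    m->items[m->num++] = v;     /* ...and writes past the real buffer */
    return 0;
}

int main(void)
{
    CompatMap m = { malloc(32 * sizeof(int)), 32, 32 };
    if (!m.items)
        return 1;
    map_resize_buggy(&m, 8);    /* shrink to 8; m.size still says 32 */
    map_append(&m, 42);         /* heap overflow: writes items[8]
                                   into an 8-element allocation      */
    free(m.items);
    return 0;
}
```

In the X.Org Server the out-of-bounds write is driven by attacker-controlled request data rather than a hard-coded append, which is what turns this stale-capacity pattern into a privilege escalation on servers running as root.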

OpenAI Builds First Chip With Broadcom and TSMC, Scales Back Foundry Ambition

OpenAI is partnering with Broadcom and TSMC to design its first in-house AI chip while supplementing its infrastructure with AMD chips, aiming to reduce its reliance on Nvidia GPUs. "The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts," adds Reuters. From the report: OpenAI has been working for months with Broadcom to build its first AI chip focusing on inference, according to sources. Demand right now is greater for training chips, but analysts have predicted that the need for inference chips could surpass demand for training chips as more AI applications are deployed. Broadcom helps companies including Alphabet unit Google fine-tune chip designs for manufacturing and also supplies parts of the design that help move information on and off the chips quickly. This is important in AI systems, where tens of thousands of chips are strung together to work in tandem. OpenAI is still determining whether to develop or acquire other elements for its chip design, and may engage additional partners, said two of the sources. The company has assembled a chip team of about 20 people, led by top engineers who have previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho. Sources said that through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company to make its first custom-designed chip in 2026. They said the timeline could change. Currently, Nvidia's GPUs hold over 80% market share. But shortages and rising costs have led major customers like Microsoft, Meta, and now OpenAI to explore in-house or external alternatives. Read more of this story at Slashdot.