What Happened After Remote Workers Were Offered $10,000 to Move to Tulsa?

Five years ago remote workers were offered $10,000 to move to Tulsa, Oklahoma for at least a year. Since then roughly 3,300 have accepted the offer, according to the New York Times. But more importantly, researchers are now looking at the results:

Their research, released this month, surveyed 1,248 people — including 411 who had participated in Tulsa Remote and others who were accepted but didn't move or weren't accepted but had applied to the program — and found that remote workers who moved to Tulsa saved an average of $25,000 more on annual housing costs than the group that was chosen but didn't move... Nearly three-quarters of participants who have completed the program are still living in Tulsa. The program brings them together for farm-to-table dinners, movie nights and local celebrity lectures to help build community, given that none have offices to commute to.

The article says every year the remote workers contribute $14.9 million in state income taxes and $5.8 million in sales taxes (more than offsetting the $33 million spent over the last five years). And additional benefits could be even greater. "We know that for every dollar we've spent on the incentive, there's been about a $13 return on that investment to the city," the program's managing director told Fortune — pointing out that the remote workers have an average salary of $100,000. (500 of the 3,300 even bought homes...)

The Tulsa-based George Kaiser Family Foundation — which provides the $10,000 awards — told the New York Times it will continue funding the program "so long as it demonstrates to be a community-enhancing opportunity." And with so much of the population now able to work remotely, the lead author on the latest study adds that "Every heartland mayor should pay attention to this..."
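The tax arithmetic is easy to sanity-check. A quick back-of-the-envelope calculation in Python, generously assuming the current annual tax contribution held for all five years (the article doesn't say how it ramped up):

```python
# Sanity-checking the article's tax figures (all inputs taken from the story).
# Assumes today's annual contribution held for all five years, which likely
# overstates early years when fewer participants had arrived.
annual_income_tax = 14.9e6  # state income taxes contributed per year
annual_sales_tax = 5.8e6    # sales taxes contributed per year
program_cost = 33e6         # total spent on the program over five years
years = 5

receipts = (annual_income_tax + annual_sales_tax) * years
print(f"Five-year tax receipts: ${receipts / 1e6:.1f}M")                # $103.5M
print(f"Tax receipts per dollar spent: {receipts / program_cost:.1f}")  # ~3.1
```

Even on taxes alone the program more than recoups its cost; the $13-per-dollar figure quoted by the managing director presumably folds in broader economic activity beyond tax receipts.

Read more of this story at Slashdot.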

Python Overtakes JavaScript on GitHub, Annual Survey Finds

GitHub released its annual "State of the Octoverse" report this week. And while "Systems programming languages, like Rust, are also on the rise... Python, JavaScript, TypeScript, and Java remain the most widely used languages on GitHub." In fact, "In 2024, Python overtook JavaScript as the most popular language on GitHub." They also report that usage of Jupyter Notebooks "skyrocketed" with a 92% jump, which along with Python's rise seems to underscore "the surge in data science and machine learning on GitHub..."

We're also seeing increased interest in AI agents and smaller models that require less computational power, reflecting a shift across the industry as more people focus on new use cases for AI... While the United States leads in contributions to generative AI projects on GitHub, we see more absolute activity outside the United States. In 2024, there was a 59% surge in the number of contributions to generative AI projects on GitHub and a 98% increase in the number of projects overall — and many of those contributions came from places like India, Germany, Japan, and Singapore... Notable growth is occurring in India, which is expected to have the world's largest developer population on GitHub by 2028, as well as across Africa and Latin America... [W]e have seen greater growth outside the United States every year since 2013 — and that trend has sped up over the past few years.

Last year they'd projected India would have the most developers on GitHub by 2027, but now believe it will happen a year later. This year's top 10?

1. United States
2. India
3. China
4. Brazil
5. United Kingdom
6. Russia
7. Germany
8. Indonesia
9. Japan
10. Canada

(Interestingly, the UK's population ranks #21 among countries of the world, while Germany ranks #19, and Canada ranks #36.)

GitHub's announcement argues the rise of non-English, high-population regions "is notable given that it is happening at the same time as the proliferation of generative AI tools, which are increasingly enabling developers to engage with code in their natural language." And they offer one more data point:

GitHub's For Good First Issue is a curated list of Digital Public Goods that need contributors, connecting those projects with people who want to address a societal challenge and promote sustainable development... Significantly, 34% of contributors to the top 10 For Good First Issue projects... made their first contribution after signing up for GitHub Copilot.

There are now 518 million projects on GitHub — with year-over-year growth of 25%...
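The Octoverse rankings come from GitHub's internal data, but anyone can pull a single repository's language mix from the public REST API. A minimal sketch using the real GET /repos/{owner}/{repo}/languages endpoint (which returns bytes of code per language); the repository queried is just an example:

```python
# Fetch a repository's language breakdown from GitHub's public REST API and
# convert the per-language byte counts into percentage shares.
import requests

def language_shares(owner: str, repo: str) -> dict[str, float]:
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}/languages")
    resp.raise_for_status()
    bytes_per_language = resp.json()  # e.g. {"Python": 19_000_000, "C": 13_000_000}
    total = sum(bytes_per_language.values())
    return {lang: count / total for lang, count in bytes_per_language.items()}

for lang, share in language_shares("python", "cpython").items():
    print(f"{lang:12} {share:.1%}")
```

Read more of this story at Slashdot.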

Will Charging Cables Ever Have a Single Standardized Port?

The Atlantic complains that our chaos of different plug types "was supposed to end, with USB-C as our savior." But part of the problem is what they call "the second circle of our cable hell: My USB-C may not be the same as yours. And the USB-C you bought two years ago may not be the same as the one you got today. And that means it might not do what you now assume it can."

A lack of standardization is not the problem here. The industry has designed, named, and rolled out a parade of standards that pertain to USB and all its cousins. Some of those standards live inside other standards. For example, USB 3.2 Gen 1 is also known as USB 3.0, even though it's numbered 3.2. (What? Yes.) And both of these might be applied to cables with USB-A connectors, or USB-B, or USB-Micro B, or — why not? — USB-C. The variations stretch on and on toward the horizon.

Hope persists that someday, eventually, this hell can be escaped — and that, given sufficient standardization, regulatory intervention, and consumer demand, a winner will emerge in the battle of the plugs. But the dream of having a universal cable is always and forever doomed, because cables, like humankind itself, are subject to the curse of time, the most brutal standard of them all. At any given moment, people use devices they bought last week alongside those they've owned for years; they use the old plugs in rental cars or airport-gate-lounge seats; they buy new gadgets with even better capabilities that demand new and different (if similar-looking) cables. Even if Apple puts a USB-C port in every new device, and so does every other manufacturer, that doesn't mean that they will do everything you will expect cables to do in the future. Inevitably, you will find yourself needing new ones.

Back in 1998, the New York Times told me, "If you make your move to U.S.B. now, you can be sure that your new devices will have a port to plug into." I was ready! I'm still ready. But alas, a port to plug into has never been enough. Obligatory XKCD.
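The Atlantic's "USB 3.2 Gen 1 is also known as USB 3.0" aside is accurate, and the retroactive renaming goes further. A small lookup table makes the alias chains concrete; the spec names and signaling rates below are the USB-IF's published ones, though real-world cable labeling is even less consistent than this:

```python
# The same underlying specs under their successive names, with signaling rates.
SAME_SPEC = {
    # canonical name: (later names for the same spec, signaling rate)
    "USB 3.0": (["USB 3.1 Gen 1", "USB 3.2 Gen 1"], "5 Gbps"),
    "USB 3.1 Gen 2": (["USB 3.2 Gen 2"], "10 Gbps"),
    "USB 3.2 Gen 2x2": ([], "20 Gbps"),
}

def decode(name: str) -> str:
    for canonical, (aliases, speed) in SAME_SPEC.items():
        if name == canonical or name in aliases:
            chain = " = ".join([canonical, *aliases])
            return f"{name} -> {speed} ({chain})"
    return f"{name}: not in table"

print(decode("USB 3.2 Gen 1"))  # 5 Gbps -- the same spec once sold as USB 3.0
```

And any of those specs can sit behind a USB-A, USB-B, Micro-B, or USB-C connector, which is the magazine's point: the plug shape tells you almost nothing.

Read more of this story at Slashdot.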

Researchers Develop New Method That Tricks Cancer Cells Into Killing Themselves

Our bodies divest themselves of 60 billion cells every day through a natural process called "apoptosis." So Stanford Medicine researchers are developing a new approach to cancer therapy that could "trick cancer cells into disposing of themselves," according to an announcement from Stanford's medical school:

Their method accomplishes this by artificially bringing together two proteins in such a way that the new compound switches on a set of cell death genes... One of these proteins, BCL6, when mutated, drives the blood cancer known as diffuse large B-cell lymphoma... [It] sits on DNA near apoptosis-promoting genes and keeps them switched off, helping the cancer cells retain their signature immortality. The researchers developed a molecule that tethers BCL6 to a protein known as CDK9, which acts as an enzyme that catalyzes gene activation, in this case switching on the set of apoptosis genes that BCL6 normally keeps off.

"The idea is, Can you turn a cancer dependency into a cancer-killing signal?" asked Nathanael Gray, PhD, the Krishnan-Shah Family Professor and a professor of chemical and systems biology, co-senior author with Gerald Crabtree. "You take something that the cancer is addicted to for its survival and you flip the script and make that be the very thing that kills it..."

When the team tested the molecule in diffuse large B-cell lymphoma cells in the lab, they found that it indeed killed the cancer cells with high potency. They also tested the molecule in healthy mice and found no obvious toxic side effects, even though the molecule killed off a specific category of the animals' healthy B cells, a kind of immune cell, which also depend on BCL6. They're now testing the compound in mice with diffuse large B-cell lymphoma to gauge its ability to kill cancer in a living animal.

Because the technique relies on the cells' natural supply of BCL6 and CDK9 proteins, it seems to be very specific for the lymphoma cells — the BCL6 protein is found only in this kind of lymphoma cell and in one specific kind of B cell. The researchers tested the molecule in 859 different kinds of cancer cells in the lab; the chimeric compound killed only diffuse large B-cell lymphoma cells.

Scientists have been trying to shut down cancer-driving proteins, one of the researchers says, but instead, "we're trying to use them to turn signaling on that, we hope, will prove beneficial for treatment." The two researchers have co-founded the biotech startup Shenandoah Therapeutics, which "aims to further test this molecule and a similar, previously developed molecule," according to the article, "in hopes of gathering enough pre-clinical data to support launching clinical trials of the compounds." They also plan to build similar molecules that could target other cancer-driving proteins...

Read more of this story at Slashdot.

How a Slice of Cheese Almost Derailed Europe’s Most Important Rocket Test

Long-time Slashdot reader schwit1 shared this report from the blog Interesting Engineering: A team of students made history this month by performing Europe's first rocket hop test. Those who have followed SpaceX's trajectory will know hop tests are a vital stepping stone for a reusable rocket program, as they allow engineers to test their rocket's landing capabilities. Impressively, no private company or space agency in Europe had ever performed a rocket hop test before. Essentially, a group of students performed one of the most important rocket tests in the history of European rocketry. However, the remarkable nature of this story doesn't end there. Amazingly, the whole thing was almost derailed by a piece of cheese. A slice of Gruyère the team strapped to their rocket's landing legs almost caused the rocket to spin out of control. Thankfully, disaster was averted, and the historic hopper didn't end up as rocket de-Brie. Read more of this story at Slashdot.

Leaked Training Shows Doctors In New York’s Biggest Hospital System Using AI

Slashdot reader samleecole shared this report from 404 Media:

Northwell Health, New York State's largest healthcare provider, recently launched a large language model tool that it is encouraging doctors and clinicians to use for translation and for handling sensitive patient data, and it has suggested the tool can be used for diagnostic purposes, 404 Media has learned. Northwell Health has more than 85,000 employees. An internal presentation and employee chats obtained by 404 Media show how healthcare professionals are using LLMs and chatbots to edit writing, make hiring decisions, do administrative tasks, and handle patient data.

In the presentation given in August, Rebecca Kaul, senior vice president and chief of digital innovation and transformation at Northwell, along with a senior engineer, discussed the launch of the tool, called AI Hub, and gave a demonstration of how clinicians and researchers — or anyone with a Northwell email address — can use it... AI Hub can be used for "clinical or clinical adjacent" tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, summarizing electronic medical record excerpts, and inputting patients' personally identifying and protected health information. The demonstration also showed potential capabilities that included "detect pancreas cancer" and "parse HL7," a health data standard used to share electronic health records.

The leaked presentation shows that hospitals are increasingly using AI and LLMs to streamline administrative tasks, and that some are experimenting with, or at least considering, how LLMs could be used in clinical settings or in interactions with patients.
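For context on the "parse HL7" capability the presentation mentions: HL7 v2 messages are pipe-delimited segments, and parsing them is conventional string handling rather than something that needs an LLM. A minimal sketch; the sample message is invented for illustration (real messages carry protected health information and are far messier):

```python
# HL7 v2 messages are segments separated by carriage returns, with fields
# separated by "|" and subfields by "^". This splits a message into segments.
SAMPLE_HL7 = (
    "MSH|^~\\&|LAB|NORTHWELL|EHR|HOSP|20240801||ADT^A01|MSG001|P|2.5\r"
    "PID|1||123456||DOE^JANE||19800101|F\r"
)

def parse_hl7(message: str) -> dict:
    segments: dict[str, list] = {}
    for raw in filter(None, message.split("\r")):
        fields = raw.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

parsed = parse_hl7(SAMPLE_HL7)
print(parsed["PID"][0][4])  # 'DOE^JANE' -- PID-5, the patient name field
```

Read more of this story at Slashdot.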

New Study Suggests Oceans Absorb More CO2 Than Previously Thought

Long-time Slashdot reader schwit1 shared this story from SciTechDaily:

New research confirms that subtle temperature differences at the ocean surface, known as the "ocean skin," increase carbon dioxide absorption. This discovery, based on precise measurements, suggests global oceans absorb 7% more CO2 than previously thought, aiding climate understanding and carbon assessments... Until now, global estimates of air-sea CO2 fluxes have typically ignored the importance of temperature differences in the near-surface layer...

Dr Gavin Tilstone, from Plymouth Marine Laboratory (PML), said: "This discovery highlights the intricacy of the ocean's water column structure and how it can influence CO2 draw-down from the atmosphere. Understanding these subtle mechanisms is crucial as we continue to refine our climate models and predictions. It underscores the ocean's vital role in regulating the planet's carbon cycle and climate."
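To see why a slightly cooler skin layer changes the numbers, here is a minimal sketch using the standard bulk flux formulation, F = k * K0 * (pCO2_sea - pCO2_air), with the Weiss (1974) solubility fit. The 0.2 K skin cooling is an illustrative value, and this toy calculation is not the study's 7% result (which also accounts for temperature effects at the interface beyond solubility alone):

```python
# CO2 is more soluble in colder water, so evaluating solubility K0 at the cool
# "skin" temperature (instead of the bulk temperature) raises computed uptake.
import math

def weiss_k0(T: float, S: float = 35.0) -> float:
    """Weiss (1974) CO2 solubility K0 in mol kg^-1 atm^-1; T in kelvin."""
    t = T / 100.0
    ln_k0 = (-60.2409 + 93.4517 / t + 23.3585 * math.log(t)
             + S * (0.023517 - 0.023656 * t + 0.0047036 * t ** 2))
    return math.exp(ln_k0)

bulk = weiss_k0(288.15)        # bulk seawater at 15 degrees C
skin = weiss_k0(288.15 - 0.2)  # skin layer, a couple tenths of a kelvin cooler
print(f"K0 (bulk): {bulk:.4f} mol/kg/atm")
print(f"Uptake change from using skin temperature: {skin / bulk - 1:+.2%}")  # ~+0.6%
```

Read more of this story at Slashdot.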

After Silence, NASA’s Voyager Finally Phones Home – With a Device Unused Since 1981

Somewhere off in interstellar space, 15.4 billion miles away from Earth, NASA's 47-year-old Voyager 1 "recently went quiet," reports Mashable. The probe "shut off its main radio transmitter for communicating with mission control..."

Voyager's problem began on October 16, when flight controllers sent the robotic explorer a somewhat routine command to turn on a heater. Two days later, when NASA expected to receive a response from the spacecraft, the team learned something tripped Voyager's fault protection system, which turned off its X-band transmitter. By October 19, communication had altogether stopped. The flight team was not optimistic. However, Voyager 1 was equipped with a backup that relies on a different, albeit significantly fainter, frequency. No one knew if the second radio transmitter could still work, given the aging spacecraft's extreme distance. Days later, engineers with the Deep Space Network, a system of three enormous radio dish arrays on Earth, found the signal whispering back over the S-band transmitter. The device hadn't been used since 1981, according to NASA.

"The team is now working to gather information that will help them figure out what happened and return Voyager 1 to normal operations," NASA said in a recent mission update.

It's been more than 12 years since Voyager entered interstellar space, the article points out. And interstellar space "is a high-radiation environment that nothing human-made has ever flown in before. That means the only thing the teams running the old probes can count on are surprises."
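That "two days later" isn't bureaucratic lag; at Voyager 1's distance, light itself needs nearly a day each way. A quick check of the story's own figure:

```python
# One-way and round-trip light time at Voyager 1's distance from Earth.
MILES_TO_M = 1609.344
C = 299_792_458  # speed of light, m/s

distance_m = 15.4e9 * MILES_TO_M  # 15.4 billion miles, per the story
one_way_h = distance_m / C / 3600
print(f"One-way light time: {one_way_h:.1f} hours")                   # ~23 hours
print(f"Command-to-reply round trip: {2 * one_way_h / 24:.1f} days")  # ~1.9 days
```

Read more of this story at Slashdot.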

Millions of U.S. Cellphones Could Be Vulnerable to Chinese Government Surveillance

Millions of U.S. cellphone users could be vulnerable to Chinese government surveillance, warns a Washington Post columnist, "on the networks of at least three major U.S. carriers." They cite six current or former senior U.S. officials, all of whom were briefed about the attack by the U.S. intelligence community.

The Chinese hackers, who the United States believes are linked to Beijing's Ministry of State Security, have burrowed inside the private wiretapping and surveillance system that American telecom companies built for the exclusive use of U.S. federal law enforcement agencies — and the U.S. government believes they likely continue to have access to the system... The U.S. government and the telecom companies that are dealing with the breach have said very little publicly about it since it was first detected in August, leaving the public to rely on details trickling out through leaks...

The so-called lawful-access system breached by the Salt Typhoon hackers was established by telecom carriers after the terrorist attacks of Sept. 11, 2001, to allow federal law enforcement officials to execute legal warrants for records of Americans' phone activity or to wiretap them in real time, depending on the warrant. Many of these cases are authorized under the Foreign Intelligence Surveillance Act (FISA), which is used to investigate foreign spying that involves contact with U.S. citizens. The system is also used for legal wiretaps related to domestic crimes.

It is unknown whether hackers were able to access records about classified wiretapping operations, which could compromise federal criminal investigations and U.S. intelligence operations around the world, multiple officials told me. But they confirmed the previous reporting that hackers were able to both listen in on phone calls and monitor text messages. "Right now, China has the ability to listen to any phone call in the United States, whether you are the president or a regular Joe, it makes no difference," one of the hack victims briefed by the FBI told me. "This has compromised the entire telecommunications infrastructure of this country."

The Wall Street Journal first reported on Oct. 5 that China-based hackers had penetrated the networks of U.S. telecom providers and might have penetrated the system that telecom companies operate to allow lawful access to wiretapping capabilities by federal agencies... [After releasing a short statement], the FBI notified 40 victims of Salt Typhoon, according to multiple officials. The FBI informed one person who had been compromised that the initial group of identified targets included six affiliated with the Trump campaign, this person said, and that the hackers had been monitoring them as recently as last week... "They had live audio from the president, from JD, from Jared," the person told me. "There were no device compromises, these were all real-time interceptions..." [T]he duration of the surveillance is believed to date back to last year.

Several officials told the columnist that the cyberattack also targeted senior U.S. government officials and top business leaders — and that even more compromised targets are being discovered. At this point, "Multiple officials briefed by the investigators told me the U.S. government does not know how many people were targeted, how many were actively surveilled, how long the Chinese hackers have been in the system, or how to get them out." But the article does include this quote from U.S. Senate Intelligence Committee chairman Mark Warner: "It is much more serious and much worse than even what you all presume at this point."

One U.S. representative suggested Americans rely more on encrypted apps. The U.S. is already investigating — but while researching the article, the columnist writes, "The National Security Council declined to comment, and the FBI did not respond to a request for comment..." They end with this recommendation: "If millions of Americans are vulnerable to Chinese surveillance, they have a right to know now."

Read more of this story at Slashdot.

New ‘Open Source AI Definition’ Criticized for Not Opening Training Data

Long-time Slashdot reader samj — also a long-time Debian developer — tells us there's some opposition to the newly-released Open Source AI definition. He calls it a "fork" that undermines the original Open Source definition (which was originally derived from Debian's Free Software Guidelines, written primarily by Bruce Perens), and points us to a new domain with a petition declaring that instead Open Source shall be defined "solely by the Open Source Definition version 1.9. Any amendments or new definitions shall only be recognized with clear community consensus via an open and transparent process."

This move follows some discussion on the Debian mailing list:

Allowing "Open Source AI" to hide their training data is nothing but setting up a "data barrier" protecting the monopoly, disabling anybody other than the first party to reproduce or replicate an AI. Once passed, OSI is making a historical mistake towards the FOSS ecosystem.

They're not the only ones worried about data. This week TechCrunch noted an August study which "found that many 'open source' models are basically open source in name only. The data required to train the models is kept secret, the compute power needed to run them is beyond the reach of many developers, and the techniques to fine-tune them are intimidatingly complex. Instead of democratizing AI, these 'open source' projects tend to entrench and expand centralized power, the study's authors concluded."

samj shares the concern about training data, arguing that training data is the source code and that this new definition has real-world consequences. (On a personal note, he says it "poses an existential threat to our pAI-OS project at the non-profit Kwaai Open Source Lab I volunteer at, so we've been very active in pushing back past few weeks.") He also asked ChatGPT what the implications would be if Debian disavowed the OSI's Open Source AI definition. ChatGPT composed a 7-point, 14-paragraph response, concluding that this level of opposition would "create challenges for AI developers regarding licensing. It might also lead to a fragmentation of the open-source community into factions with differing views on how AI should be governed under open-source rules." But "Ultimately, it could spur the creation of alternative definitions or movements aimed at maintaining stricter adherence to the traditional tenets of software freedom in the AI age."

However the official FAQ for the new Open Source AI definition argues that training data "does not equate to a software source code."

Training data is important to study modern machine learning systems. But it is not what AI researchers and practitioners necessarily use as part of the preferred form for making modifications to a trained model... [F]orks could include removing non-public or non-open data from the training dataset, in order to train a new Open Source AI system on fully public or open data... [W]e want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information — like decisions about their health. Similarly, much of the world's Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

Read on for the rest of their response...
Read more of this story at Slashdot.