Keeping up with an industry that evolves as quickly as AI is a difficult task. So until an AI can do it for you, here's a helpful summary of recent stories in the world of machine learning, along with notable research and experiments that we didn't cover on their own.
This week in AI, OpenAI signed up its first higher education customer: Arizona State University.
ASU will collaborate with OpenAI to bring ChatGPT, OpenAI's AI-powered chatbot, to the university's researchers, staff and faculty, and will host an open challenge in February inviting faculty and staff to submit ideas for ways to use ChatGPT.
The OpenAI-ASU agreement illustrates the changing views around AI in education as the technology advances faster than curricula can keep up. Last summer, schools and universities were quick to ban ChatGPT for fear of plagiarism and misinformation. Since then, some have reversed their bans, while others have begun organizing workshops on GenAI tools and their potential for learning.
The debate over GenAI's role in education isn't likely to be resolved anytime soon. But, for what it's worth, I find myself increasingly in the pro camp.
Yes, GenAI is a poor summarizer. It's biased and toxic. It makes things up. But it can also be used for good.
Consider how a tool like ChatGPT could help students who are struggling with an assignment. It could explain a math problem step by step or generate an essay outline. Or it could surface the answer to a question that would take far longer to find via Google.
Now, there are reasonable concerns about cheating, or at least what might be considered cheating within the confines of current curricula. I've heard anecdotes of students, particularly college students, using ChatGPT to write large chunks of papers and to answer essay questions on take-home exams.
This is not a new problem: paid essay writing services have been around for a long time. But ChatGPT dramatically lowers the barrier to entry, some educators argue.
There is evidence suggesting that these fears are exaggerated. But setting that aside for a moment, I propose we take a step back and consider what drives students to cheat in the first place. Students are often rewarded for their grades, not their effort or understanding. The incentive structure is warped. Is it any wonder, then, that kids view schoolwork as boxes to check rather than opportunities to learn?
So let's give students access to GenAI, and let's let educators test ways to leverage this new technology to reach students where they are. I don't have much hope for drastic educational reform. But perhaps GenAI will serve as a launching pad for lesson plans that get kids excited about topics they'd never have explored otherwise.
Here are some other notable AI stories from the past few days:
Microsoft Reading Tutor: Microsoft this week made Reading Coach, its AI tool that gives learners personalized reading practice, available at no cost to anyone with a Microsoft account.
Algorithmic transparency in music: EU regulators are calling for laws mandating greater algorithmic transparency on music streaming platforms. They also want to address AI-generated music and deepfakes.
NASA robots: NASA recently showed off a self-assembling robotic structure that, Devin writes, could become a crucial part of moving off-planet.
Samsung Galaxy, now powered by AI: At Samsung's Galaxy S24 launch event, the company showed off the various ways AI could improve the smartphone experience, including live translation of calls, suggested replies and actions, and a new gesture-based way to perform Google searches.
DeepMind Geometry Solver: DeepMind, Google's AI R&D lab, this week unveiled AlphaGeometry, an AI system that the lab says can solve as many geometry problems as the average International Mathematical Olympiad gold medalist.
OpenAI and crowdsourcing: In other OpenAI news, the startup is forming a new team, Collective Alignment, to implement ideas from the public on how to ensure its future AI models "align with the values of humanity." At the same time, it is changing its policy to allow military applications of its technology. (Talk about mixed messages.)
A Pro plan for Copilot: Microsoft launched a consumer-focused paid plan for Copilot, the umbrella brand for its portfolio of AI-powered content generation technologies, and relaxed eligibility requirements for enterprise-level Copilot offerings. It also launched new features for free users, including a Copilot smartphone app.
Misleading models: Most humans learn the skill of deceiving other humans. So can AI models learn the same? The answer seems to be yes, and, frighteningly, they're exceptionally good at it, according to a new study from AI startup Anthropic.
Tesla stages robotics demonstration: Elon Musk's Tesla humanoid robot, Optimus, is doing more things: this time folding a T-shirt on a table at a development facility. But as it turns out, the robot is anything but autonomous at the current stage.
More machine learning
One of the things holding back broader applications of things like AI-powered satellite analysis is the need to train models to recognize what can be a fairly esoteric shape or concept. Identifying the outline of a building: easy. Identifying debris fields after a flood: not so easy! Researchers at Switzerland's EPFL hope to make this easier with a program they call METEOR.
"The problem in environmental science is that it is often impossible to get a data set large enough to train AI programs for our research needs," said Marc Rußwurm, one of the project leads. Their new training framework allows a recognition algorithm to be trained for a new task with just four or five representative images, with results comparable to models trained on far more data. The plan is to graduate the system from the lab to a product with a user interface that everyday people (i.e., non-AI researchers) can use. You can read the paper they published here.
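For intuition, here's a minimal sketch of the general few-shot recipe, freezing a pretrained backbone and fitting only a small task head on a handful of labeled images. This is an illustration of the idea, not METEOR's actual method, and the data, labels and task here are all placeholders.

```python
# Sketch of few-shot adaptation: fine-tune only the final layer of a
# pretrained backbone on four or five labeled satellite image patches.
# (Hypothetical task and placeholder data, not the METEOR codebase.)
import torch
import torch.nn as nn
from torchvision import models

# Tiny "support set": 5 image tensors and binary labels
# (e.g., "debris field" vs. "not debris field").
support_images = torch.randn(5, 3, 224, 224)  # placeholder data
support_labels = torch.tensor([1, 1, 0, 0, 1])

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new task head

# Freeze everything except the new head, so 5 examples are enough signal.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(50):  # a few dozen steps suffice on 5 images
    optimizer.zero_grad()
    loss = loss_fn(backbone(support_images), support_labels)
    loss.backward()
    optimizer.step()
```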
Going in the other direction (generating images) is a field of intense research, as doing so efficiently could reduce the computational load on generative AI platforms. The most common method is called diffusion, which gradually refines a pure noise source into a target image. Los Alamos National Laboratory has a new approach it calls Blackout Diffusion, which instead starts from a pure black image.
That eliminates the need for noise to begin with, but the real advance is that the framework works in "discrete spaces" rather than continuous ones, which greatly reduces the computational load. They say it performs well and at lower cost, but it's definitely far from wide release. I'm not qualified to evaluate the effectiveness of this approach (the math is far beyond me), but national labs don't tend to overhype something like this for no reason. I'll ask the researchers for more information.
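For context on what the conventional method does, here's a toy sketch of the standard noise-based diffusion sampling loop described above. It's the baseline technique (a DDPM-style reverse process), not Blackout Diffusion, and the `denoiser` is a hypothetical stand-in for a trained network.

```python
# Toy sketch of conventional diffusion sampling: start from pure Gaussian
# noise and gradually refine it into an image. `denoiser` is assumed to be
# a trained model that predicts the noise present in x at step t.
import torch

def sample(denoiser, steps=1000, shape=(1, 3, 64, 64)):
    # Simple linear noise schedule, precomputed for all steps.
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # begin with pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, t)  # model's estimate of the noise in x
        # Remove a little of the predicted noise (DDPM update rule).
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn(shape)
    return x  # gradually refined from noise into an image
```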
AI models are emerging across the natural sciences, where their ability to separate signal from noise produces new insights and saves money on hours of graduate-student data entry.
Australia is deploying Pano AI's wildfire-detection technology in its "Green Triangle," a major forestry region. I love seeing startups put to use this way: not only could it help prevent fires, but it also produces valuable data for forestry and natural-resource authorities. Every minute counts with wildfires (or bushfires, as they call them there), so early notification could mean the difference between tens and thousands of acres of damage.
Los Alamos gets a second mention (I just realized while going over my notes), as it's also working on a new AI model for estimating permafrost decline. Existing models for this are low resolution, predicting permafrost levels in chunks of about 1/3 of a square mile. That's certainly useful, but finer detail gives less misleading results for areas that might look like 100% permafrost at the larger scale but are clearly less than that on closer inspection. As climate change progresses, these measurements need to be accurate!
Biologists are finding interesting ways to test and use AI or AI-adjacent models in that domain's many subfields. At a recent conference covered by my friends at GeekWire, tools for tracking zebras, insects and even individual cells were on display in poster sessions.
And on the physics and chemistry side, researchers at Argonne National Laboratory are looking for the best way to package hydrogen for use as fuel. Free hydrogen is notoriously difficult to contain and control, so binding it to a special helper molecule keeps it docile. The problem is that hydrogen binds to virtually everything, so there are billions of possibilities for helper molecules. But sorting through huge data sets is a specialty of machine learning.
"We were looking for organic liquid molecules that hold onto hydrogen for a long time, but not so tightly that it can't be easily removed on demand," said Hassan Harb of the project. Their system sorted through 160 billion molecules, and by using an AI screening method they were able to examine 3 million per second, so the entire process took about half a day. (They were using a pretty big supercomputer, of course.) They identified 41 of the best candidates, a manageable number for the experimental team to test in the lab. Hopefully they find something useful; I don't want to have to deal with hydrogen leaks in my next car.
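As a rough illustration of that kind of screening funnel (invented numbers and function names throughout, not Argonne's actual pipeline): score every candidate with a fast learned model in batches, then keep only the handful that clear a strict cutoff for lab testing.

```python
# Hedged sketch of a billion-scale screening funnel. The scorer below is
# a random stand-in for a trained model rating how well each candidate
# molecule binds and releases hydrogen (higher = better).
import numpy as np

def predict_release_score(batch: np.ndarray) -> np.ndarray:
    return np.random.rand(len(batch))  # placeholder scores

candidates = np.arange(10_000_000)  # IDs standing in for the full library
keep = []
for start in range(0, len(candidates), 1_000_000):  # batched inference
    batch = candidates[start:start + 1_000_000]
    scores = predict_release_score(batch)
    # A very strict cutoff leaves only a few dozen survivors.
    keep.extend(batch[scores > 0.999_999])

print(f"{len(keep)} candidates advance to lab testing")
```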
To end on a cautionary note, though: a study in Science found that machine learning models used to predict how patients would respond to certain treatments were very accurate... within the sample groups they were trained on. In other cohorts, they basically didn't help at all. This doesn't mean they shouldn't be used, but it supports what many in the field have been saying: AI is not a silver bullet, and it should be thoroughly tested in every new population and application it's applied to.
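In practice, that testing discipline looks something like the sketch below: report performance separately for each new cohort rather than trusting one pooled, in-sample number. The dataset, column names and sites here are all hypothetical, not the study's data.

```python
# Sketch of per-population validation: train on one cohort, then check
# performance on each held-out cohort individually. A sharp drop on a
# new site is exactly the failure the Science study describes.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("patients.csv")  # assumed columns: features, 'response', 'site'
features = [c for c in df.columns if c not in ("response", "site")]

# Train on the original cohort only...
train = df[df["site"] == "site_A"]
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train[features], train["response"])

# ...then evaluate each other population on its own, never pooled.
for site, group in df[df["site"] != "site_A"].groupby("site"):
    auc = roc_auc_score(group["response"],
                        model.predict_proba(group[features])[:, 1])
    print(f"{site}: AUC = {auc:.2f}")
```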