12 December 2022

New Book: No Miracles Needed - How Today's Technology Can Save Our Climate and Clean Our Air by Mark Z. Jacobson

Image: Cambridge University Press
Many of us working on climate change may already be aware of Mark Jacobson's work: he is a professor of Civil and Environmental Engineering and Director of the Atmosphere/Energy Program at Stanford University. His work forms the scientific basis of the Green New Deal and of many laws and commitments by cities, states, and countries to transition to 100 percent renewable electricity and heat generation.
No Miracles Needed - How Today's Technology Can Save Our Climate and Clean Our Air by Mark Z. Jacobson reminds us that the world needs to turn away from fossil fuels and adopt clean, renewable sources of energy as soon as we can. Failure to do so will cause catastrophic climate damage sooner than you might think, leading to loss of biodiversity and to economic and political instability. We still have time to save the planet without resorting to 'miracle' technologies: we need to wave goodbye to outdated approaches, such as natural gas and carbon capture, and repurpose the technologies we already have at our disposal. Existing technologies can harness, store, and transmit energy from wind, water, and solar sources to ensure reliable electricity, heat supplies, and energy security. Find out what you can do to improve the health, climate, and economic state of our planet. Together, we can solve the climate crisis, eliminate air pollution, and safely secure energy supplies for everyone.

The Book:
- Lays out the framework for how to solve the climate, air pollution and energy security problems of our times, including an honest analysis of what we should not be doing 

- Shares up-to-date information on the technologies available to solve these problems, providing actionable solutions to help fight the climate crisis 

- Provides suggestions on what individuals, communities and nations can do to solve energy issues, helping the reader take steps to save our planet 

- Explores the implications of the policies needed to fight climate change, providing insights into the current landscape and the solutions available

You can purchase the book here: https://www.cambridge.org/us/academic/subjects/earth-and-environmental-science/climatology-and-climate-change/no-miracles-needed-how-todays-technology-can-save-our-climate-and-clean-our-air

18 October 2022

Scaling AI Summit

The Scaling AI Summit is happening in person in Silicon Valley Nov 30 – Dec 1. Here’s more info on the event: https://scalingaisummit.com/events/scaling-ai-summit See you there!

07 March 2022

New Book: Let It Shine: The 6,000-Year Story of Solar Energy

Smart Tech News would like to recommend a NEW Book for our Readers:
The paperback edition of Let It Shine: The 6,000-Year Story of Solar Energy by John Perlin features a new preface by the author and a foreword by Stanford Professor Mark Z. Jacobson, PhD. The Book:

  • Features a new preface by the author, detailing the explosive growth of solar since the publication of the cloth edition of Let It Shine in 2013
  • The price of solar modules has dropped by 90 percent since 2010, and installation has grown exponentially
  • Solar panels are now the cheapest energy source in history
  • Features a new foreword by Stanford professor Mark Z. Jacobson, a pioneer in developing road maps to transition states and countries to 100 percent clean, renewable energy

What People are saying: 
“The definitive history of solar power.”
— The Financial Times

“John Perlin is the historian of solar energy.”
— Daniel Yergin, Pulitzer Prize–winning author of The Prize and The Quest

More about the book:
Even as concern over climate change and dropping prices fuel a massive boom in solar technology, many still think of solar as a twentieth-century wonder. Few realize that the first photovoltaic array appeared on a New York City rooftop in 1884, or that brilliant engineers in France were using solar power in the 1860s to run steam engines, or that in 1901 an ostrich farmer in Southern California used a single solar engine to irrigate three hundred acres of citrus trees. Fewer still know that Leonardo da Vinci planned to make his fortune by building half-mile-long mirrors to heat water, or that the Bronze Age Chinese used hand-size solar-concentrating mirrors to light fires the way we use matches and lighters today.

Let It Shine is a fully revised and expanded edition of A Golden Thread, John Perlin’s classic history of solar technology, detailing the past forty-plus years of technological developments driving today’s solar renaissance. This unique and compelling compendium of humankind’s solar ideas tells the fascinating story of how our predecessors throughout time, again and again, have applied the sun to better their lives — and how we can too.
You can buy it at your local bookstore, and here is one store in San Francisco.
Enjoy!!

30 August 2020

Winners of the 2020 IoT World Awards Announced


Sixteen winners of the 2020 IoT World Awards were selected from more than 600 nominees.

Winners of the second-annual IoT World Awards were announced on Wednesday, August 12, 2020, at the Internet of Things World conference. The awards series celebrates innovative individuals, teams, organizations and partnerships that advance IoT technologies, deployments and ecosystems.


This year, there were more than 600 nominations for the awards. Entrants spanned the Internet of Things ecosystem, from industrial IoT technology to edge computing and consumer offerings, as well as deployments in several industry sectors. More than 80 companies and individuals were selected as finalists before 16 winners were crowned. 


A panel of judges from Omdia, Informa Tech and the industry chose the winners of the core awards based on the entries’ innovation, market traction and other factors. The panel evaluated nominations in January 2020, and the IoT World Awards shortlist was published in February 2020. Two leadership awards were also selected based on votes from almost 5,000 industry professionals.


IoT World Today and IoT World Series also introduced the COVID-19 Innovation Award, recognizing companies for their work in combating the novel coronavirus. This award was judged separately by a smaller panel in July 2020.


The complete list of winners of this year’s competition follows:


Technologies

  • Industrial IoT Solution: Zebra End-to-End Supply Chain Visibility
  • Edge Computing Solution: FogHorn Lightning Edge AI Platform
  • IoT Connectivity Solution: STMicroelectronics STM32WLE
  • IoT Platform: Software AG Cumulocity IoT
  • IoT Security Solution: Darktrace Enterprise Immune System
  • Consumer IoT Solution: AWS IoT for Connected Home


Deployments

  • Manufacturing IoT Deployment: AGCO Component Manufacturing, using a digital execution system from Proceedix
  • Energy IoT Deployment: Saudi Aramco’s camera-based Auto Well Space Out
  • Healthcare IoT Deployment: En-route online point-of-care testing service for the London Ambulance Service
  • Public Sector IoT Deployment: Libelium flexible sensor platform and Terralytix Edge Buoy
  • Consumer IoT Deployment: Kinetic Secure by F-Secure, Windstream and Actiontec


Ecosystem Development

  • IoT Partnership of the Year: iBASIS Global Access for Things
  • Startup of the Year: Latent AI (Latent AI Efficient Inference Platform)


COVID-19 IoT Innovation Award

  • Igor Nexos Intelligent Disinfection System


Enterprise Leader of the Year

  • Deanna Kovar, vice president, production and precision agriculture production systems at John Deere


IoT Solutions Leader of the Year

  • Aleksander Poniewierski, global IoT leader at EY

08 March 2020

Make AI Explanations Everyone Can Understand

https://www.technologyreview.com/s/615110/why-asking-an-ai-to-explain-itself-can-make-things-worse/

Why asking an AI to explain itself can make things worse

Creating neural networks that are more transparent can lead us to over-trust them. The solution might be to change how they explain themselves.

Jan 29, 2020
Image: Frogger about to speak (MS Tech / Getty)
Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver’s seat, anxious passengers were encouraged to watch a “pacifier” screen that showed a car’s-eye view of the road: hazards picked out in orange and red, safe zones in cool blue.
For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: “Don’t get freaked out—this is why the car is doing what it’s doing.” But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassuring him. It got Ehsan thinking: what if the self-driving car could really explain itself?
The success of deep learning is due to tinkering: the best neural networks are tweaked and adapted to make better ones, and practical results have outpaced theoretical understanding. As a result, the details of how a trained model works are typically unknown. We have come to think of them as black boxes.
A lot of the time we’re okay with that when it comes to things like playing Go or translating text or picking the next Netflix show to binge on. But if AI is to be used to help make decisions in law enforcement, medical diagnosis, and driverless cars, then we need to understand how it reaches those decisions—and know when they are wrong.
People need the power to disagree with or reject an automated decision, says Iris Howley, a computer scientist at Williams College in Williamstown, Massachusetts. Without this, people will push back against the technology. “You can see this playing out right now with the public response to facial recognition systems,” she says.

Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box. The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning—and thus whether the resulting model is accurate and unbiased.
One solution is to build machine-learning systems that show their workings: so-called glassbox—as opposed to black-box—AI. Glassbox models are typically much-simplified versions of a neural network in which it is easier to track how different pieces of data affect the model.
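As a rough illustration of the glassbox idea (a generic sketch, not any specific system mentioned in the article; the feature names and data are invented), here is a minimal Python example using scikit-learn: a linear model whose handful of learned weights can be read off directly, so you can see how each input pushes the prediction.
```python
# Minimal glassbox sketch: a linear model whose learned weights can be
# read off directly. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age (decades), income (tens of thousands), prior_default flag
X = np.array([
    [3.5, 4.0, 0],
    [5.2, 9.0, 0],
    [2.3, 2.0, 1],
    [4.4, 6.0, 1],
])
y = np.array([1, 1, 0, 0])  # 1 = loan approved in the training data

model = LogisticRegression().fit(X, y)

# The model's entire "reasoning" is these few numbers, one per feature.
for name, coef in zip(["age", "income", "prior_default"], model.coef_[0]):
    print(f"{name:15s} weight = {coef:+.3f}")
```
A deep network trained on the same task might handle messier inputs, but the influence of each feature would be spread across thousands of weights instead of three numbers.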
“There are people in the community who advocate for the use of glassbox models in any high-stakes setting,” says Jennifer Wortman Vaughan, a computer scientist at Microsoft Research. “I largely agree.” Simple glassbox models can perform as well as more complicated neural networks on certain types of structured data, such as tables of statistics. For some applications that's all you need.
But it depends on the domain. If we want to learn from messy data like images or text, we’re stuck with deep—and thus opaque—neural networks. The ability of these networks to draw meaningful connections between very large numbers of disparate features is bound up with their complexity.
Even here, glassbox machine learning could help. One solution is to take two passes at the data, training an imperfect glassbox model as a debugging step to uncover potential errors that you might want to correct. Once the data has been cleaned up, a more accurate black-box model can be trained.
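To make the two-pass workflow concrete, here is a hedged sketch with invented data; a gradient-boosted model simply stands in for the eventual black-box network. A transparent model is fit first, the rows it gets wrong are flagged for review, and the heavier model is trained afterwards on the cleaned data.
```python
# Two-pass sketch: a transparent model flags suspect rows, then a heavier
# model is trained on the cleaned data. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
y[:10] = 1 - y[:10]                      # simulate mislabeled rows

# Pass 1: cheap glassbox model used purely for debugging.
debug_model = LogisticRegression().fit(X, y)
suspect = debug_model.predict(X) != y    # rows the simple model gets wrong
print("rows flagged for review:", np.where(suspect)[0])

# Pass 2: after reviewing/cleaning the flagged rows, train the real model.
X_clean, y_clean = X[~suspect], y[~suspect]
final_model = GradientBoostingClassifier().fit(X_clean, y_clean)
print("accuracy on cleaned data:", final_model.score(X_clean, y_clean))
```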
It's a tricky balance, however. Too much transparency can lead to information overload. In a 2018 study looking at how non-expert users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model’s mistakes.
Another approach is to include visualizations that show a few key properties of the model and its underlying data. The idea is that you can see serious problems at a glance. For example, the model could be relying too much on certain features, which could signal bias.
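As a toy version of this kind of at-a-glance check (invented data, not one of the tools studied below), permutation importance can reveal a model leaning almost entirely on a single proxy feature:
```python
# Toy at-a-glance check: permutation importance exposes a model that leans
# almost entirely on one proxy feature. Names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
zip_code = rng.integers(0, 5, n)             # potential proxy for bias
income = rng.normal(50, 15, n)
y = (zip_code >= 3).astype(int)              # outcome driven by the proxy
X = np.column_stack([zip_code, income])

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["zip_code", "income"], result.importances_mean):
    print(f"{name:10s} importance = {imp:.3f}")  # zip_code should dominate
```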
These visualization tools have proved incredibly popular in the short time they’ve been around. But do they really help? In the first study of its kind, Vaughan and her team have tried to find out—and exposed some serious issues.
The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the machine-learning model picked up on most in training. Eleven AI professionals were recruited from within Microsoft, all different in education, job roles, and experience. They took part in a mock interaction with a machine-learning model trained on a national income data set taken from the 1994 US census. The experiment was designed specifically to mimic the way data scientists use interpretability tools in the kinds of tasks they face routinely.  
What the team found was striking. Sure, the tools sometimes helped people spot missing values in the data. But this usefulness was overshadowed by a tendency to over-trust and misread the visualizations. In some cases, users couldn’t even describe what the visualizations were showing. This led to incorrect assumptions about the data set, the models, and the interpretability tools themselves. And it instilled a false confidence about the tools that made participants more gung-ho about deploying the models, even when they felt something wasn’t quite right. Worryingly, this was true even when the output had been manipulated to show explanations that made no sense. 
To back up the findings from their small user study, the researchers then conducted an online survey of around 200 machine-learning professionals recruited via mailing lists and social media. They found similar confusion and misplaced confidence.
Worse, many participants were happy to use the visualizations to make decisions about deploying the model despite admitting that they did not understand the math behind them. “It was particularly surprising to see people justify oddities in the data by creating narratives that explained them,” says Harmanpreet Kaur at the University of Michigan, a coauthor on the study. “The automation bias was a very important factor that we had not considered.”
Ah, the automation bias. In other words, people are primed to trust computers. It’s not a new phenomenon. When it comes to automated systems from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem.
What can we do about it? For some, part of the trouble with the first wave of XAI is that it is dominated by machine-learning researchers, most of whom are expert users of AI systems. Says Tim Miller of the University of Melbourne, who studies how humans use AI systems: “The inmates are running the asylum.”
This is what Ehsan realized sitting in the back of the driverless Uber. It is easier to understand what an automated system is doing—and see when it is making a mistake—if it gives reasons for its actions the way a human would. Ehsan and his colleague Mark Riedl are developing a machine-learning system that automatically generates such rationales in natural language. In an early prototype, the pair took a neural network that had learned how to play the classic 1980s video game Frogger and trained it to provide a reason every time it made a move.

Image: Screenshot of Ehsan and Riedl's Frogger explanation software (Upol Ehsan)

To do this, they showed the system many examples of humans playing the game while talking out loud about what they were doing. They then took a neural network for translating between two natural languages and adapted it to translate instead between actions in the game and natural-language rationales for those actions. Now, when the neural network sees an action in the game, it “translates” it into an explanation. The result is a Frogger-playing AI that says things like “I’m moving left to stay behind the blue truck” every time it moves. 
Ehsan and Riedl’s work is just a start. For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind’s board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense?
Reasons help whether we understand them or not, says Ehsan: “The goal of human-centered XAI is not just to make the user agree to what the AI is saying—it is also to provoke reflection.” Riedl recalls watching the livestream of the tournament match between DeepMind's AI and Korean Go champion Lee Sedol. The commentators were talking about what AlphaGo was seeing and thinking. "That wasn’t how AlphaGo worked," says Riedl. "But I felt that the commentary was essential to understanding what was happening."
What this new wave of XAI researchers agree on is that if AI systems are to be used by more people, those people must be part of the design from the start—and different people need different kinds of explanations. (This is backed up by a new study from Howley and her colleagues, in which they show that people’s ability to understand an interactive or static visualization depends on their education levels.) Think of a cancer-diagnosing AI, says Ehsan. You’d want the explanation it gives to an oncologist to be very different from the explanation it gives to the patient.
Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds—and anyone sitting in the backseat of a self-driving car. “We’ve always known that people over-trust technology, and that’s especially true with AI systems,” says Riedl. “The more you say it’s smart, the more people are convinced that it’s smarter than they are.”
Explanations that anyone can understand should help pop that bubble.

AI Is an Energy-Guzzler. We Need to Re-Think Its Design, and Soon
By Peter Rejcek, Singularity University

https://singularityhub.com/2020/02/28/ai-is-an-energy-guzzler-we-need-to-re-think-its-design-and-soon/
There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.
Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or for Alexa to understand a voice command.
The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.
For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.
OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
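For a sense of scale, a 3.4-month doubling time compounds very quickly. This back-of-the-envelope snippet (the time spans are just examples) shows the implied growth factor:
```python
# Back-of-the-envelope: growth implied by a 3.4-month doubling time.
# The doubling time comes from the OpenAI analysis quoted above; the
# time spans below are just examples.
DOUBLING_TIME_MONTHS = 3.4

def compute_growth(months: float) -> float:
    """Factor by which training compute grows over the given months."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

print(f"growth over 1 year : {compute_growth(12):,.0f}x")
print(f"growth over 6 years: {compute_growth(72):,.0f}x")
```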

Getting Smarter About AI Chip Design

While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.
One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.
To build a computer brain more akin to a human one, the big brains at Graphcore are swapping the precise but time-consuming number-crunching typical of a conventional microprocessor for arithmetic that is content to be less precise.
The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.
The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.
Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that the main advantage of in-memory computing is the lack of any data movement, which is the main bottleneck of conventional digital computers. It also allows data to be processed in parallel through the intimate interactions among the various currents and voltages within the memory array.
Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.
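For readers curious about what such a circuit computes (as opposed to how the analog hardware computes it), here is a purely digital sketch of the same underlying math on synthetic data: an ordinary least-squares regression solved in a single linear-algebra call.
```python
# Digital sketch of the math the memristor circuit solves: an ordinary
# least-squares regression, here done in one linear-algebra call on
# synthetic data. This illustrates the computation, not the analog hardware.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))                        # input measurements
true_w = np.array([2.0, -1.0, 0.5])
b = A @ true_w + rng.normal(scale=0.01, size=100)    # noisy observations

# One call recovers the regression weights; the crossbar array reaches an
# equivalent answer through Ohm's and Kirchhoff's laws acting all at once.
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print(w)   # close to [2.0, -1.0, 0.5]
```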

It’s Not Just a Hardware Problem

The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.
“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.
He’s not the only one.
One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, vice president of Qualcomm Technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.
One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.
It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantifies data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
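Here is a very small sketch of the core idea, with made-up numbers and no claim to represent Qualcomm's or anyone else's actual method: each weight becomes a distribution rather than a single number, and predictions are averaged over random draws, which also yields an uncertainty estimate.
```python
# Tiny Bayesian-style sketch: every weight is a distribution, and the
# prediction is averaged over random draws. Numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
w_mean = np.array([0.8, -0.3])      # learned mean of each weight
w_std = np.array([0.05, 0.10])      # learned uncertainty of each weight
x = np.array([1.0, 2.0])            # one input example

def predict_once() -> float:
    w = rng.normal(w_mean, w_std)            # draw a random weight vector
    return 1 / (1 + np.exp(-(w @ x)))        # one-neuron "network"

samples = [predict_once() for _ in range(100)]
print("mean prediction:", np.mean(samples))
print("uncertainty    :", np.std(samples))
```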
A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency: converting deep learning neural networks into what’s called a spiking neural network. The researchers “spiked” their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much as Bayesian deep learning does.
The DSNN imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because unnecessary computations are skipped.
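As a toy illustration of the spiking principle (parameters invented and unrelated to the ORNL system), a single leaky integrate-and-fire neuron only triggers downstream work when its accumulated input crosses a threshold:
```python
# Toy leaky integrate-and-fire neuron: downstream work ("computation")
# only happens on a spike. Parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random(50)            # stream of input currents
threshold, leak = 1.0, 0.9

potential, spikes = 0.0, []
for current in inputs:
    potential = potential * leak + current   # leaky integration
    if potential >= threshold:
        spikes.append(1)           # spike: downstream computation fires
        potential = 0.0            # reset after firing
    else:
        spikes.append(0)           # no spike: computation is skipped

print(f"{sum(spikes)} spikes over {len(spikes)} time steps")
```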
The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.
Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.
“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.

07 March 2020

‘Neutral’ Stanford University Tree Hacks Winner

‘Neutral’ is the Moonshot winner at Stanford Tree Hacks! Watch creators and presenters Harry Zhang, Marissa Liu, and Estelle Chung in the video below. Neutral shows the carbon emissions of products you are considering buying and lets you buy offsets to plant trees.