How to make AI greener and more efficient


Wirth Research, an engineering company that specialises in computational fluid dynamics, has become increasingly concerned with environmental sustainability.

It initially focused on the design of racing cars, allowing clients to replace expensive wind tunnel work with computerised modelling, but in recent years it has designed equipment that reduces the aerodynamic drag of lorries, and a device which reduces cold air escaping from open-fronted supermarket fridges, cutting energy use by a quarter.

The Bicester-based company also wanted to reduce the energy used by its detailed computerised modelling, which for car aerodynamics simulates around half a billion tiny cells of air. It had already adjusted the resolution of cells within each model, with a finer sub-millimetre mesh used near sharp edges.

Then, during the pandemic, when it realised staff could work effectively from home, Wirth moved its computing from its own site to a renewable energy-powered datacentre in Iceland run by Verne Global. The new hardware has cut energy requirements by three-quarters, and the power used is carbon neutral.

Engineering manager Rob Rowsell says that the total cost of the new equipment spread over several years, plus use of the Icelandic facility and connectivity, amounts to less than the old UK power bill. On top of that, as it plans to continue with hybrid working, the company has moved to smaller offices in an eco-friendly building.

Wirth wants to make its computing processes still more efficient. It can already halt iterations of virtual models when they stabilise rather than running them a fixed number of times, but it is looking at how it can use artificial intelligence (AI) trained on previous work to use a handful of iterations to predict a stable version of a model that would normally take much longer to reach.

The prediction would not need to be entirely accurate as the company would then carry out a few more iterations to check the model was stable. “You would end up being able to do 15 or 20 iterations rather than 100,” says Rowsell.
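To illustrate the general idea (this is a minimal sketch with invented numbers, not Wirth’s actual solver or model), the snippet below iterates a toy relaxation solver until the change between iterations drops below a tolerance, then compares a cold start against a warm start from a stand-in for an AI-predicted solution, which only needs a few confirming iterations:

```python
import numpy as np

def run_solver(state, step, tol=1e-4, max_iters=100):
    """Iterate a solver step until the solution stabilises or the iteration cap is hit."""
    for i in range(1, max_iters + 1):
        new_state = step(state)
        residual = np.max(np.abs(new_state - state))   # how much the solution changed
        state = new_state
        if residual < tol:                             # converged: stop early
            return state, i
    return state, max_iters

# Toy "solver step": relaxation towards a fixed target field (stand-in for a CFD update).
target = np.random.default_rng(0).random(1000)
step = lambda s: s + 0.2 * (target - s)

# Cold start from a crude initial guess.
cold, n_cold = run_solver(np.zeros(1000), step)

# Warm start from a stand-in for an AI-predicted converged field, followed by a few
# confirming iterations to check the solution really is stable.
predicted = target + np.random.default_rng(1).normal(0, 1e-3, 1000)
warm, n_warm = run_solver(predicted, step)

print(f"cold start: {n_cold} iterations, warm start: {n_warm} iterations")
```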

There is much potential to use AI to tackle climate change, says Peter van der Putten, director of decisioning and AI solutions at Massachusetts-based software provider Pegasystems and an assistant professor at Leiden University in the Netherlands.

But in recent years, AI has increasingly meant using deep learning models that require large amounts of computing and electricity to train and run, such as OpenAI’s GPT-3 language model, trained on almost 500 billion words and using 175 billion parameters.

“Until recently, it was fashionable to come up with yet another model which was bigger,” says van der Putten. But environmental considerations are highlighting the benefits of making AI more efficient, and rising electricity costs are strengthening the economic case. “Both from a financial point of view as well as from a climate point of view, small is beautiful.”

Another reason is that simpler, more efficient models can produce better results. In 2000, van der Putten co-ran a challenge where participants tried to predict which customers of an insurance company would be interested in buying cover for caravans, based on dozens of variables on thousands of people.

This featured real-life noisy data, which can lead complex models astray. “You might start to see patterns where there are none. You start to overfit data,” says van der Putten. This problem occurs when training data is not quite the same as the data for which predictions are required – such as when they cover two different sets of people. Simpler models also work well when there are clear relationships or when there are only a few data points.
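The effect is easy to reproduce on synthetic data. The sketch below (a hypothetical illustration using NumPy polynomial fits, not the caravan challenge data) fits a simple and a deliberately over-complex model to the same noisy sample; the complex model scores better on the training points but worse on fresh data drawn from the same relationship:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training data drawn from a simple underlying relationship (y = 2x + noise).
x_train = rng.uniform(0, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.3, 20)

# Fresh data from the same relationship, standing in for the people we predict for.
x_test = rng.uniform(0, 1, 200)
y_test = 2 * x_test + rng.normal(0, 0.3, 200)

def fit_and_score(degree):
    """Fit a polynomial of the given complexity and measure error on old and new data."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (1, 15):
    train_err, test_err = fit_and_score(degree)
    print(f"degree {degree:2d}: train error {train_err:.3f}, test error {test_err:.3f}")

# The degree-15 fit chases the noise: it looks better on the training points but
# worse on new data, having started to "see patterns where there are none".
```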

It can also be difficult and expensive to revise big models trained on vast amounts of data. For evolving situations, such as allocating work to a group of employees with lots of joiners and leavers, lighter “online learning” models designed to adapt quickly based on new information can be the best option.
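A minimal sketch of the online learning idea, using scikit-learn’s SGDClassifier and its partial_fit method on simulated, gradually shifting data (the scenario and numbers here are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()            # lightweight linear model updated incrementally
classes = np.array([0, 1])

def make_batch(shift):
    """Simulate a day's data; `shift` mimics gradual change as staff join and leave."""
    X = rng.normal(shift, 1.0, size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Update the model one small batch at a time instead of retraining from scratch.
for day, shift in enumerate(np.linspace(0.0, 1.0, 10)):
    X, y = make_batch(shift)
    model.partial_fit(X, y, classes=classes)   # incremental update on new information
    print(f"day {day}: accuracy on latest batch {model.score(X, y):.2f}")
```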

Van der Putten says that as well as being cheaper and having less environmental impact, these models are also easier to interpret. There is also the option of using classic machine learning algorithms, such as support vector machines for classifying items, which tend to be lighter because they were developed when computing power was far more limited.

Van der Putten says that AI specialists divided into tribes favouring specific techniques from the late 1980s and early 1990s, but practitioners then learnt to use different approaches in different situations or to combine them. “Getting back to more of a multi-disciplinary approach would be healthy,” he says now, given that the alternatives to big data-driven deep learning generally use much less computing power.

Got to start somewhere

One option is to give AI models a starting point or structure, according to Jon Crowcroft, professor of communication systems at Cambridge University and founder of Cambridge-based data discovery firm iKVA.

Language models used to include structural rules rather than being based on analysing billions of words, and similarly science-focused models can benefit from having relevant principles programmed in. This particularly applies when analysing language, videos or images, where volumes of data tend to be very high.

For example, an AI system could learn to identify coronavirus spike proteins more efficiently if it was given a sample spike shape. “Rather than just having zillions of images and someone labelling them, you have a ground truth model,” says Crowcroft.
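As a highly simplified, one-dimensional stand-in for that idea (not an actual protein-imaging pipeline), the sketch below starts from a known template shape and slides it along a noisy signal, scoring the match at each position, rather than learning the shape from a large labelled dataset:

```python
import numpy as np

def normalised(v):
    """Zero-mean, unit-length version of a vector, for shape comparison."""
    v = v - v.mean()
    return v / (np.linalg.norm(v) + 1e-12)

# A known "ground truth" template for the shape we care about (here, a simple 1-D spike).
spike_shape = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)
template = normalised(spike_shape)

# A noisy signal containing two spikes, at positions 40 and 120.
rng = np.random.default_rng(2)
signal = rng.normal(0, 0.2, 200)
for pos in (40, 120):
    signal[pos - 10:pos + 11] += spike_shape

# Slide the template along the signal and score how well each window matches it.
scores = np.array([
    float(np.dot(template, normalised(signal[i:i + 21])))
    for i in range(len(signal) - 21)
])
matches = np.where(scores > 0.8)[0]
print("likely spike locations near:", matches + 10)
```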

He adds that this approach is appropriate when each result can have significant consequences, such as with medical images. It can require specialists to provide the initial material, although this may not be a significant drawback if those setting up the model are experts anyway, as is likely to be the case for academic use. Such initial human input can significantly cut the computing power required to develop an AI model and make the model easier to explain.


It could also help to shift where AI works, as well as how. A federated machine learning model could involve genuinely smart meters analysing a customer’s electricity use and sending an occasional update of a resulting model to the supplier, as opposed to present-day meters that send usage data every few minutes.

“The electricity company cares about having a model of everyone’s use over time”, not what every customer is doing in near real-time, Crowcroft says.

Carrying out AI locally would mean much less data being sent across networks, saving power and money, and would offer greater privacy as detailed usage data would not leave the property. “You can flip the thing around,” adds Crowcroft. Such “edge learning” could work well for personal healthcare monitors, where privacy is particularly important.
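A minimal sketch of that arrangement, with invented numbers and a deliberately tiny two-parameter model: each simulated meter fits its model to its own readings, and only those few parameters travel to the supplier, which averages them into a population-level picture:

```python
import numpy as np

rng = np.random.default_rng(3)
HOURS = np.arange(24)
EVENING = np.exp(-0.5 * ((HOURS - 19) / 2.0) ** 2)   # shape of a typical evening peak

def local_readings(base, peak):
    """A household's hourly usage for one day; this raw data never leaves the home."""
    return base + peak * EVENING + rng.normal(0, 0.1, 24)

def fit_local_model(readings):
    """On-device fit of a two-parameter model: baseline load plus evening-peak size."""
    design = np.column_stack([np.ones(24), EVENING])
    coeffs, *_ = np.linalg.lstsq(design, readings, rcond=None)
    return coeffs   # only these two numbers are sent to the supplier

# 100 meters each fit locally; the supplier only ever sees the tiny models.
local_models = [
    fit_local_model(local_readings(base=rng.uniform(0.2, 0.5), peak=rng.uniform(1, 3)))
    for _ in range(100)
]

# The supplier averages the parameters to get a population-level view of demand.
baseline, evening_peak = np.mean(local_models, axis=0)
print(f"average baseline load: {baseline:.2f}, average evening peak: {evening_peak:.2f}")
```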

Reducing energy required for AI

If a centralised deep learning model is required, there are ways to make it more efficient. London-based code optimisation specialist TurinTech reckons it can typically reduce the energy required to run an AI model by 40%. If a slightly less accurate fit is acceptable then much greater savings are possible, according to chief executive Leslie Kanthan.

Much as a model can overfit to the particular group of people who make up its training data, a model fitted too closely to past financial trading data will struggle to predict future market behaviour. A simpler model can provide good predictions, be much cheaper to develop and much faster to set up and change – a significant advantage in trading.

TurinTech’s optimiser uses a hybrid of deep learning and genetic or evolutionary algorithms to adapt a model based on new information, rather than needing to regenerate it from scratch. “It will try to bend the deep learning model to fit,” Kanthan says.

Harvey Lewis, an associate partner of Ernst and Young UK and chief data scientist of its tax practice, says that evolutionary algorithms and Bayesian statistical methods are useful in making deep learning more efficient. However, it is common to take a brute force approach to tuning parameters in models, running through vast numbers of combinations to see what works, which for billions of parameters “is going to be pretty computationally expensive”.
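The contrast can be sketched generically (this is not TurinTech’s or EY’s tooling, just the basic evolutionary idea with an invented scoring function): rather than scoring every combination on a grid, a small population of candidate settings is kept and the fittest are mutated, so far fewer configurations need to be evaluated:

```python
import itertools
import random

random.seed(0)

def score(params):
    """Stand-in for an expensive training-and-validation run of a model."""
    lr, width = params
    return -((lr - 0.01) ** 2 * 1e4 + (width - 64) ** 2 / 1e3)   # best at lr=0.01, width=64

# Brute force: evaluate every combination on a grid (costly when each run is a real model).
grid = list(itertools.product([0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1],
                              [16, 32, 48, 64, 96, 128, 256]))
best_grid = max(grid, key=score)

# Evolutionary search: keep a small population, mutate the fittest, evaluate far fewer runs.
population = [(random.uniform(0.001, 0.1), random.choice([16, 32, 64, 128, 256]))
              for _ in range(6)]
evaluations = len(population)
for generation in range(5):
    population.sort(key=score, reverse=True)
    parents = population[:3]                          # keep the best candidates
    children = [(max(1e-4, lr * random.uniform(0.5, 1.5)),
                 max(8, int(width * random.uniform(0.5, 1.5))))
                for lr, width in parents]             # mutated offspring
    population = parents + children
    evaluations += len(children)

print("grid search best:", best_grid, "after", len(grid), "evaluations")
print("evolutionary best:", max(population, key=score), "after", evaluations, "evaluations")
```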

The costs of such work can be reduced by using specialist hardware, Lewis says. Graphics processing units, which are designed to perform calculations rapidly to generate images, are better suited than general-purpose personal computers. Field programmable gate arrays, which can be configured by users, and tensor processing units, designed by Google specifically for AI, are more efficient still, and quantum computing is set to go even further.

But Lewis says that it makes sense first to ask whether complex AI is actually required. Deep learning models are good at analysing large volumes of consistent data. “They are excellent at performing the narrow task for which they have been trained,” he says. But in a lot of cases there are simpler, cheaper options which have less environmental impact.

Lewis likes to find a baseline, the simplest AI model that can generate a reasonable answer. “Once you’ve got that, do you need to take it further, or does it provide what you need?” he says. As well as saving money, electricity and emissions, simpler models such as decision trees are easier to understand and explain, a useful feature in areas such as taxation that need to be open to checking and auditing.
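A minimal example of that kind of baseline, using scikit-learn on synthetic data with hypothetical feature names: a shallow decision tree is fitted and its rules printed in full, so the whole model can be read and audited:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)

# Synthetic, hypothetical data: two features standing in for simple tax-return fields.
X = rng.uniform(0, 100, size=(500, 2))
y = ((X[:, 0] > 50) & (X[:, 1] > 30)).astype(int)   # a rule-like pattern to recover

# A shallow tree as the simple baseline model.
baseline = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(f"baseline accuracy: {baseline.score(X, y):.2f}")

# The entire model can be printed and checked line by line, which aids auditing.
print(export_text(baseline, feature_names=["income_band", "expense_ratio"]))
```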

He adds that it is often beneficial to combine human intelligence with the artificial kind. This can include manual checks for basic data quality problems, such as whether fields marked as dates are actually recognisable as dates, before automated work starts.
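For example, a quick check of a date field with pandas (hypothetical column names and values) flags anything that cannot be parsed as a date for human review before any automated processing starts:

```python
import pandas as pd

# Hypothetical extract with a date field containing some malformed entries.
df = pd.DataFrame({
    "invoice_id": [101, 102, 103, 104],
    "invoice_date": ["2023-04-01", "2023-02-31", "not a date", "2023-04-15"],
})

# Anything that cannot be parsed becomes NaT, so it is easy to flag for review.
parsed = pd.to_datetime(df["invoice_date"], errors="coerce")
suspect = df[parsed.isna()]

print(f"{len(suspect)} of {len(df)} rows need a human check:")
print(suspect)
```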

It is then often more efficient to divide processes between machines and humans, with software doing high-volume sorting such as spotting pictures with dogs, then people making the more challenging judgements such as classifying the breed. “Bringing a human into the loop is a way to help performance and make it much more sustainable,” Lewis says.
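A common way to split that work is by model confidence, sketched below with invented probabilities and thresholds: clear-cut cases are handled automatically, and everything in between is routed to a person:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical model outputs: probability that each image contains a dog.
dog_probability = rng.uniform(0, 1, 1000)

CONFIDENT = 0.9   # hypothetical thresholds; in practice tuned against error costs
UNLIKELY = 0.1

auto_dog = dog_probability >= CONFIDENT            # machine handles the clear-cut cases
auto_not_dog = dog_probability <= UNLIKELY
needs_human = ~(auto_dog | auto_not_dog)           # people take the harder judgements

print(f"auto-accepted as dogs: {auto_dog.sum()}")
print(f"auto-rejected: {auto_not_dog.sum()}")
print(f"sent for human review: {needs_human.sum()}")
```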

Source is ComputerWeekly.com
