More than A$17 billion worth of crops grown in Australia annually is attributed to agricultural pesticides. That’s a staggering 68% of the A$26 billion industry, according to a recent Deloitte report commissioned by CropLife Australia. So should we all pat ourselves on the back and eat up?
Most of us want cheap, perfect-looking produce and farmers want to make a decent living. Agricultural pesticides have undoubtedly reduced food loss and helped farmers provide the unblemished produce we have grown so used to.
But pesticides also represent a significant risk to human and wildlife health, and a significant source of pollution in our waterways. Should we be concerned about these “costs”, and how do we account for them?
What are the costs of pesticides?
Pesticides (insecticides, herbicides, and fungicides) are applied over large areas in agriculture and urban settings. Their use represents an important source of diffuse chemical pollution that is difficult to monitor and difficult to control.
The overuse of, and reliance on, pesticides has resulted in weeds developing resistance to herbicides and insects developing resistance to insecticides. This results in excessive, ever-increasing pesticide use in an attempt to get on top of the problem.
For example, in the early 1990s the overuse of insecticides resulted in resistant cotton bollworm (a serious moth pest of cotton), which nearly brought the cotton industry to its knees. New technology – genetically modified cotton expressing a bacterial toxin that kills only caterpillars such as the bollworm – has been a saviour to the industry, and insecticide use has been reduced by 87%.
Australia also has the worst weed resistance problem in the world, and many herbicides are no longer an option for control. Profitable crop production may be at risk, and the only way to get on top of the problem is likely to be by non-chemical means.
Pests developing resistance to pesticides isn’t the only problem. The use of “broad-spectrum” insecticides also wipes out all the good insects — the ones that eat the pests munching away at crops. Consequently, other pest insects that escaped the initial spray are able to grow large populations unchecked.
Unfortunately, broad-spectrum pesticides are among the cheapest chemicals in Australia, costing only A$1.50 per hectare to apply in grain crops, which makes them an obvious choice for many farmers. These issues are challenging enough to manage in themselves, not to mention the cost to human health and wildlife.
Insects are animals with neurological systems, and many insecticides, particularly organophosphates — a widely used class of insecticides — are neurotoxins to insects and to humans. Organophosphates are still widely used in agriculture in Australia even though many have been banned in the EU, and banned or restricted in the USA. Rarely do we ever measure these costs.
What are the alternatives?
The challenge is to reduce the risk from excessive pesticide exposure while maintaining and increasing the level of crop productivity. What are our alternatives?
There is an extensive range of policy instruments used by many countries to address human and ecosystem health concerns and pesticide pollution of water and air. These include regulation; payments to encourage lower use and more accurate application; pesticide taxes to encourage greater use efficiency by farmers; and advice and information for farmers on “best practice”.
For example, in 2008 the French government launched “EcoPhyto Plan” with a goal to reduce the use of pesticides and plant protection products by 50% by 2018, with an annual budget of €41 million (A$61 million).
In 2009 the EU adopted legislation on the sustainable use of pesticides, which requires “integrated pest management” and prioritises non-chemical methods. The legislation takes effect in 2014.
This strategy will include a range of alternative management strategies to pesticides that can help control pests.
For pest insects, we can grow new crop varieties more tolerant to pest damage. We can manage weeds in fields and around field edges. We can conserve insect predators such as spiders and ladybirds. And we can selectively use insecticides that leave predators unharmed.
Research has also shown that native vegetation on farms can support these insect predators and native fauna. Managing vegetation to promote beneficial insects is known as “pest suppressive landscapes”, which could be a part of integrated pest management.
Another method may be crop rotation that produces “biofumigation” activity, such as mustards which produce a compound that inhibits fungal growth. These strategies can reduce soil-borne pathogens and break the disease cycle.
Where next for Australia?
If we compare pesticide sales and crop production in Australia we find that both increased from the early 1990s to early 2000s.
But for many OECD countries we now find that crop production has been decoupled from growth in pesticides. Instead, crop production has been boosted by other factors including education and training, payments for beneficial pest management, pesticide taxes, new pesticide products that can be used in smaller doses, and the expansion of organic farming.
In Australia there is actually very little data on pesticide use and environmental impact. This makes it difficult to judge how Australia is tracking against other countries, and how our flora and fauna are responding with continued exposure to these toxins.
Many groups and public-led alliances have expressed serious concerns about the way pesticides are regulated in Australia and about the implications for human health and the environment; several dozen pesticides banned in Europe are currently registered and commonly used in Australia.
Protecting crops against damage from weeds, insect pests and disease is an ongoing challenge. Integrated approaches, in which chemical control is one option rather than the only option, together with support for innovation from science, industry and farmers, will see us tackle these challenges.
Greater support is needed for the development and registration of “softer” chemicals that are less toxic to farm workers and the environment. Australian farming is one of our most trusted industries precisely because we take steps to protect our people and our environment. We can’t get complacent if we’re to maintain that trust.
While Hong Kong has just reported its first case of the deadly H7N9 bird flu, indicating that the virus may be spreading across China, Australia is reporting an egg shortage over Christmas as a result of the recent H7N2 cases in NSW. So how does the virus keep reinventing itself to cause problems across the world?
As over 70 per cent of emerging infectious diseases in people originate in animals, whenever we hear of a new virus outbreak we jump to find the source.
That’s not to vilify the animal species responsible, but to enable scientists to characterise the virus, track its path, assess its level of virulence and its potential impact on animal and human populations. While some recent viruses such as SARS and MERS have been tracked to bats, in the case of avian influenza in people, the source is birds.
Finding the source of influenza
As well as “bird flu”, there have in the past also been reports of “swine flu”. In fact, both these flu viruses belong to a group known as influenza A, and all influenza A viruses originally come from wild water fowl.
These complex viruses have evolved over time to become infectious to domestic birds such as farmed and back-yard poultry, pigs, horses, other domestic and wild animals and of course people. Cross-species transmissions can occur from time to time.
Viruses that infect more than one species frequently have natural hosts in which they replicate but do not cause obvious disease. The pathogen and host exist in harmony with each other and examples include Hendra, Nipah and SARS viruses in bats, Hanta viruses in rodents and influenza viruses in wild water birds.
On the whole, naturally occurring avian influenza (AI) viruses do not cause disease in wild bird populations. However, if wild water fowl shedding virus come into contact with domestic poultry or with their food or water, either directly or via their excretions, AI can enter a poultry farm.
Once on a farm, the virus can be transmitted and maintained in the poultry in low pathogenic form, or certain strains can mutate to become highly pathogenic avian influenza (HPAI) in the new host with a high fatality rate.
In the case of farmed chickens, the close contact between these birds can lead to rapid transmission and in some countries infection has jumped from the poultry to other species such as pigs and humans.
Influenza virus evolution
There are a range of different influenza virus subtypes differentiated by the external proteins of the virus: haemagglutinin (H) and neuraminidase (N). It is generally recognised there are 16 different H types and 9 different N types.
Only some viruses of the H7 and H5 subtypes progress to be highly pathogenic in poultry through the process of mutation. Other H types may cause low-level disease but do not show the highly pathogenic mutations that can occur with H7 and H5 strains.
Avian influenza is an RNA virus with eight segments to its genome, which makes it prone to re-assortment. When two or more influenza strains infect a host, the genetic material can mix, thereby producing a new strain or genotype. These genotypes can be tracked over time and the lineage identified for each of the genomic segments.
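To get a feel for why an eight-segment genome makes re-assortment such a rich source of new strains, here is a minimal counting sketch in Python. It is purely illustrative and ignores the biological constraints that make many segment combinations non-viable:

```python
# Illustrative only: count the segment combinations possible when two
# influenza strains ("A" and "B") co-infect the same cell.
from itertools import product

SEGMENTS = 8  # influenza A genomes have eight RNA segments

# Each segment of a progeny virus can come from parent A or parent B.
combinations = list(product("AB", repeat=SEGMENTS))

print(len(combinations))          # 256 possible segment mixes
print("".join(combinations[0]))   # AAAAAAAA - pure parent A
print("".join(combinations[37]))  # AABAABAB - one of the mixed genotypes
```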
The major H7 virus lineages can be traced to European and Asian (Eurasian), Australian, or North American origins. On this basis, gene sequencing of virus from an influenza outbreak can be used to determine whether it is likely to be an exotic strain newly introduced from another region, or derived from viruses already circulating in the local environment.
The Avian Influenza situation in Australia
While Australian water fowl remain predominantly local to our continent, there are many wild migratory birds such as shore birds and waders that travel across the world to share Australia’s waterways. A few of these migratory birds could potentially infect local wild water fowl.
The devastating H5N1 highly pathogenic avian influenza strain has never been detected in Australian wild or domesticated birds. All previous highly pathogenic avian influenza outbreaks in Australian poultry have been caused by H7 viruses.
Low pathogenic viruses with an H7 haemagglutinin similar to that found in the current H7N2 outbreak and the earlier H7N7 outbreak in NSW have been detected in past unrelated samples from Australian wild water fowl.
Genetic tracking gives support to the belief that outbreaks such as the October 2013 H7N2 are the result of transmission of a low pathogenic virus from a wild bird reservoir to the poultry farm, where it then turned highly pathogenic as it spread among the farmed chickens.
Both the 2012 H7N7 and 2013 H7N2 are of an Australian H7 lineage, which has been circulating naturally here for many years.
Predictive genetic analysis
Genetic markers have been identified on H5 and H7 viruses that are associated with their potential to cause disease in people. The H7N9 virus that emerged in China in February 2013, though a low pathogenic avian virus, has certain genetic markers that are believed to make it more transmissible to, and more pathogenic in, mammalian hosts.
Unlike the Chinese H7N9, the Australian H7N2 and H7N7 strains are more typical avian influenza A viruses that do not contain the same genetic markers that are a concern for disease in people.
The importance of biosecurity
Avian influenza will remain prevalent around the world so long as there are migratory birds. Biosecurity measures can mitigate the risk, but while poultry or their food or water remain in potential contact with wild birds, there remains a low possibility of the poultry becoming infected.
Biosecurity at the farm level is therefore vitally important to mitigate the risk of AI infection and biosecurity precautions to prevent disease outbreaks should be an everyday practice for all bird owners, whether large scale or back-yard poultry farmers.
Six out of ten Australians don’t eat enough fibre, and even more don’t get the right combination of fibres.
Eating dietary fibre – food components (mostly derived from plants) that resist human digestive enzymes – is associated with improved digestive health. High fibre intakes have also been linked to reduced risk of several serious chronic diseases, including bowel cancer.
In Australia, we have a fibre paradox: even though our average fibre consumption has increased over the last 20 years and is much higher than in the United States and the United Kingdom, our bowel cancer rates haven’t dropped.
This is probably because we’re eating a lot of insoluble fibre (also known as roughage) rather than a combination of fibres that includes fermentable fibres, which are important for gut health.
The different types of fibre
Eating a combination of different fibres addresses different health needs. The NHMRC recommends adults eat between 25 and 30 grams of dietary fibre each day.
For convenience, dietary fibre can be broadly divided into three types:
- Insoluble fibres or roughage promote regular bowel movements. Sources of insoluble fibre include wheat bran and high-fibre cereals, brown rice, and wholemeal breads.
- Soluble fibres slow digestion, lower plasma cholesterol levels, and even out glucose uptake to the blood. Sources of soluble fibre include oats, barley, fruits, and vegetables.
- Resistant starches contribute to health by feeding good bacteria in the large bowel, which improves its function and reduces risk of disease. Sources of resistant starch include legumes (lentils and beans), cold cooked potatoes or pasta, firm bananas, and whole grains.
Resistant starches are perhaps the least well known of the different types of fibre, but they may be the most important for human health.
International studies find that starch consumption has a stronger association with reduced bowel cancer risk than total dietary fibre does.
Resistant starch provides a likely mechanism for this association because it promotes gut health through the short-chain fatty acids produced by good bacteria. The short-chain fatty acid butyrate is the preferred energy source for cells that line the large bowel.
If we don’t eat enough resistant starch, these good bacteria in our large bowel get hungry and feed on other things including protein, releasing potentially damaging products such as phenols (digestion products of aromatic amino acids) instead of beneficial short-chain fatty acids.
Eating more resistant starch protects the bowel from the damage associated with having a hungry microbiome. It can also prevent DNA damage to colon cells; such damage is a prerequisite for bowel cancer.
Consuming at least 20 grams a day of resistant starch is thought to promote optimal bowel health. This is almost four times more than a typical western diet provides; it’s the equivalent of eating three cups of cooked lentils.
In the Australian diet, resistant starch comes mostly from legumes (beans), whole grains, and sometimes from cooked and cooled starches in dishes such as potato salad.
This is in stark contrast with other societies, such as India where legumes are a significant part of the diet, or South Africa where maize porridge is a staple often eaten cold.
Cooling starches allows the long chains of sugars that make them up to cross-link, which makes them resistant to digestion in the small intestine. This, in turn, makes them available to good bacteria in the large bowel.
A healthy digestive system is critical for good health, and fibre promotes digestive health. While most of us feel uncomfortable talking about our bowel movements, having an understanding of what is optimal in this department can help you adjust the amount of fibre in your diet.
There’s a wide array of bowel habits in the normal population, but many health experts agree that using tools such as the Bristol stool chart can help people understand what bowel movements are best. As usual with medical advice, if you’re concerned you should start a conversation with your doctor.
A high-fibre diet should give you a score of four or five on the Bristol stool chart, and less than four could indicate that you need more fibre in your diet. If you increase your fibre intake, you will also need to drink more fluids because fibre absorbs water.
But gut health is not as simple as just ensuring regular bowel motions. Australians are, on average, eating sufficient insoluble fibre, but not enough resistant starch, which promotes gut health by feeding good bacteria in the large bowel.
Resistant starches are fermentable carbohydrates, so you might wonder if eating more of them will increase flatulence. Farting is normal and the average number of emissions per day is twelve for men and seven for women, although that varies for both sexes from two to 30 emissions.
Nutritional trials have shown high-fibre intakes of up to 40 grams daily, including fermentable carbohydrates, don’t lead to significant differences in bloating, gas or discomfort, as measured by the Gastrointestinal Quality of Life Index.
Nonetheless, it’s sensible to increase your fibre intake over weeks and drink adequate water. You might change to a high-fibre breakfast cereal one week, change to a wholegrain bread the next, and gradually introduce more legumes over several weeks.
A slow increase will allow you and your good bacteria to adjust to the high-fibre diet, so that you aren’t surprised by changes in your bowel habits. The composition of bacteria in your large bowel will adjust to suit a high-fibre diet, and over weeks these changes will help you process more fibre.
Getting enough fibre is important, but getting a combination of fibres is imperative for good digestive health.
Most people know that eating insoluble fibre improves regular bowel movements, but the benefits of soluble fibre in slowing glucose release and resistant starch in promoting beneficial bacteria are less well known. Including a variety of fibres in your diet will ensure you get the health benefits of all of them.
By Leon Rotstayn, Senior Principal Research Scientist, Marine and Atmospheric Research
Climate scientists have established a convincing case for the link between increasing concentrations of greenhouse gases and observed warming of the Earth since the 19th century. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) stated, “Human influence on the climate system is clear.
“This is evident from the increasing greenhouse gas concentrations in the atmosphere, positive radiative forcing, observed warming, and understanding of the climate system.” It also concludes that “it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.”
Aside from carbon dioxide, another human influence on climate comes from aerosols, which exert a cooling effect. Aerosols (that is, atmospheric particles, not propellants used in spray cans) have masked some of the warming that is caused by increasing greenhouse gases.
Without the masking effect of aerosols, global temperatures would have increased more than they have since the 19th century.
I recently led a study that examined the effects of declining aerosols in 21st century climate projections. We found that global warming is likely to accelerate in the next few decades, if the cooling influence of human-generated aerosols declines as predicted.
What are aerosols?
Aerosols are atmospheric particles, which have an overall cooling influence on climate by reflecting sunlight back into space. They also have an indirect effect by making clouds brighter; this further increases the reflection of sunlight back into space. Sources of human-generated aerosols include the use of fossil fuels and burning of vegetation.
Although human sources of aerosols are broadly similar to those of carbon dioxide, there is an important difference.
Emissions of carbon dioxide are intrinsically linked to the energy content of the fuel, so increasing energy use leads to increasing emissions of carbon dioxide. But aerosols are produced as a by-product of the combustion process, and in many cases there are technologies that can reduce emissions of aerosols (or gases that subsequently form aerosols in the atmosphere).
Because aerosols have harmful effects on human health and the environment, such technologies have been deployed in the industrialised world for some time. As long ago as the mid-1970s, emissions of sulfur dioxide from coal-fired power stations started to decline in Europe and North America, due to controls that were introduced to combat acid rain, which was destroying forests. These controls also had the effect of reducing sulfate, an aerosol that exerts cooling effects on climate.
More recently, authorities in China recognised the problems caused by aerosol pollution, and began to introduce emission controls, similar to those first seen in the industrialised world in the 1970s. Observations in China show that aerosol pollution peaked in 2006, and has started to decrease since then, despite continuing rapid economic growth. However, high levels of aerosol pollution are still causing serious concerns about health effects in China, suggesting that there is a strong need to further reduce emissions.
In many other developing countries, aerosol emissions are still increasing.
Over the last several years, climate modellers from many research centres carried out new climate projections, which provided input to the IPCC Fifth Assessment Report. These climate projections are driven by a range of scenarios (or “pathways”), which have different assumptions about changing levels of greenhouse gases.
A common feature of all the pathways is that aerosol emissions decline sharply during the 21st century. The projected decline is based on the assumption that once wealth per capita reaches a certain level in each country, there will be an increased focus on cleaner, healthier air.
In other words, it is assumed that during the 21st century the developing world will follow a path similar to the industrialised world, where aerosol emissions have declined in recent decades.
What happens to climate if aerosols decline?
Whereas increasing aerosols have masked global warming in the past, projected declines in aerosol emissions would unmask the warming effects of increasing greenhouse gases.
We are currently going through a transition. Until recently, aerosols have been acting like a “handbrake” on global warming. Over the next few decades, the decline of aerosols is expected to accelerate global warming, adding to the effects of increasing greenhouse gases.
Results from CSIRO climate modelling suggest that the extra warming effect from a decline in aerosols could be about 1 degree by the end of the century. But the size of this effect is very uncertain, so we compared the results from the CSIRO model with those from a range of international models.
We found that models with a stronger aerosol cooling effect in the 20th century tend to simulate greater warming in the 21st century. In other words, climate models with a stronger aerosol masking effect also have a stronger unmasking effect as aerosols decline.
Understanding aerosol effects is one of the biggest challenges for climate scientists. Aerosol processes are highly complex, and the magnitude of aerosol cooling effects in today’s climate is uncertain.
Every aerosol plume contains a mind-boggling soup of different chemical species; some of these (most notably black carbon) actually exert warming effects on climate, partly offsetting the cooling effects of other species such as sulfate. It is also unclear whether aerosols will really decline as rapidly as assumed in the projections.
Aerosols present an intriguing policy challenge. Concerns about toxic effects of aerosols on health and the environment provide strong reasons to reduce their emissions. But a uniform reduction in aerosol emissions is expected to accelerate global warming.
Based on this research, scientists have suggested that selectively reducing black carbon emissions is a possible option for mitigating global warming that will also have important health benefits.
Leon Rotstayn receives funding from the Australian Government Department of the Environment through the Australian Climate Change Science Programme.
More climate news on our Climate Response blog.
By Michael Brünig, Deputy Chief, Computational Informatics
There isn’t a radio-control handset in sight as a nimble robot briskly weaves itself in and out of the confined tunnels of an underground mine.
Powered by ultra-intelligent sensors, the robot intuitively moves and reacts to the changing conditions of the terrain, entering areas unfit for human testing. As it does so, the robot transmits a detailed 3D map of the entire location to the other side of the world.
While this might read like a scenario from a George Orwell novel, it is actually a reasonable step into the not-so-distant future of the next generation of robots.
A recent report released by the McKinsey Global Institute predicts that new technologies such as advanced robotics, mobile internet and 3D printing could contribute between US$14 trillion and US$33 trillion to the global economy per year by 2025.
Technology advisory firm Gartner also recently released a report predicting the “smart machine era” to be the most disruptive in the history of IT. This trend includes the proliferation of contextually aware, intelligent personal assistants, smart advisers, advanced global industrial systems and the public availability of early examples of autonomous vehicles.
If the global technology industry and governments are to reap the productivity and economic benefits from this new wave of robotics, they need to act now to identify simple yet innovative ways to disrupt their current workflows.
The automotive industry is already embracing this movement by discovering a market for driver assistance systems that includes parking assistance, autonomous driving in “stop and go” traffic and emergency braking.
In August 2013, Mercedes-Benz demonstrated how their “self-driving S Class” model could drive the 100-kilometre route from Mannheim to Pforzheim in Germany. (Exactly 125 years earlier, Bertha Benz drove that route in the first ever automobile, which was invented by her husband Karl Benz.)
The car they used for the experiment looked entirely like a production car and used most of the standard sensors on board, relying on vision and radar to complete the task. Similar to other autonomous cars, it also used a crucial extra piece of information to make the task feasible – it had access to a detailed 3D digital map to accurately localise itself in the environment.
When implemented at scale, these autonomous vehicles have the potential to significantly benefit governments by reducing the number of accidents caused by human error, as well as easing traffic congestion, since cars will no longer need to maintain the large gaps between each other that tailgating laws currently enforce.
In these examples, the task (localisation, navigation, obstacle avoidance) is either constrained enough to be solvable or can be solved with the provision of extra information. However, there is a third category, where humans and autonomous systems augment each other to solve tasks.
This can be highly effective, but requires a human remote operator or, depending on real-time constraints, a human on stand-by.
The question arises: how can we build a robot that can navigate complex and dynamic environments without 3D maps as prior information, while keeping the cost and complexity of the device to a minimum?
Using as few sensors as possible, a robot needs to be able to get a consistent picture of its environment and its surroundings to enable it to respond to changing and unknown conditions.
This is the same question that stood before us at the dawn of robotics research, and it was addressed in the 1980s and 1990s to deal with spatial uncertainty. However, the decreasing cost of sensors, the increasing computing power of embedded systems and the ability to provide 3D maps have reduced the importance of answering this key research question.
In an attempt to refocus on this central question, we – researchers at the Autonomous Systems Laboratory at CSIRO – tried to stretch the limits of what’s possible with a single sensor: in this case, a laser scanner.
In 2007, we took a vehicle equipped with laser scanners facing to the left and to the right and asked if it was possible to create a 2D map of the surroundings and to localise the vehicle to that same map without using GPS, inertial systems or digital maps.
The result was the development of our now commercialised Zebedee technology – a handheld 3D mapping system incorporating a laser scanner that sways on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it.
While the system does add a simple inertial measurement unit which helps to track the position of the sensor in space and supports the alignment of sensor readings, the overall configuration still maximises information flow from a very simple and low cost setup.
It achieves this by moving the smarts away from the sensor and into the software to compute a continuous trajectory of the sensor, specifying its position and orientation at any time and taking its actual acquisition speed into account to precisely compute a 3D point cloud.
The crucial step of bringing the technology back to the robot still has to be completed. Imagine what is possible when equipping robots with such mobile 3D mapping technologies removes the barriers to autonomous vehicles entering unknown environments, or to actively collaborating with humans. Such robots can be significantly smaller and cheaper while still being robust in terms of localisation and mapping accuracy.
From laboratory to factory floor
A specific area of interest for this robust mapping and localisation is the manufacturing sector, where non-static environments are becoming more and more common – the aviation industry, for example. Cost and complexity for each device have to be kept to a minimum to meet these industry needs.
With a trend towards more agile manufacturing setups, the technology enables lightweight robots that are able to navigate safely and quickly through unstructured and dynamic environments like conventional manufacturing workplaces. These fully autonomous robots have the potential to increase productivity in the production line by reducing bottlenecks and performing unstructured tasks safely and quickly.
The pressure of growing global competition means that if manufacturers do not find ways to adopt these technologies soon, they run the risk of losing their business, as competitors will soon be able to produce and distribute goods more efficiently and at lower cost.
It is worth pushing the boundaries of what information can be extracted from very simple systems. New systems that implement this paradigm will be able to gain the benefits of unconstrained autonomous robots, but this requires a change in the way we look at production and manufacturing processes.
This article is an extension of a keynote presented at the robotics industry business development event RoboBusiness in Santa Clara, CA on October 25 2013.
By Gary Fitt, Director, Biosecurity Flagship
When the Department of Agriculture called a halt to imports of pop star Katy Perry’s latest album this month, they weren’t making a musical judgement. They were protecting Australia’s biosecurity.
Perry’s album included paper impregnated with seeds so listeners could grow their own plants; while well-meaning, those seeds could pose a biosecurity risk. That’s exactly the kind of risk that gets investigated in efforts to keep pests out of our country.
Biosecurity is a process – a set of linked, science-based protocols and procedures aimed at stopping unwanted pests and diseases from arriving in Australia, detecting and rapidly eradicating them if they do arrive, or (if they become established) trying to minimise their impact through long-term management strategies.
“Biosecurity” was first used to describe preventive and quarantine measures to reduce the risk of invasive pests or diseases arriving at a specific location that could damage crops and livestock as well as the wider environment.
Today, biosecurity can encompass much more. It includes managing biological threats to our people, industries or environment. These may be from exotic (foreign) or endemic organisms but they can also extend to pandemic diseases and the threat of bioterrorism.
A challenging environment
Australia’s island status protects us from exotic pests and diseases to a certain extent, but we also have an enormous border to protect. International trade is increasing, and ships, planes and people are moving in increasing volumes across international and state borders. This means there is more pressure than ever on our biosecurity surveillance and response systems.
Australia has an enviable biosecurity and quarantine system. But there is no such thing as zero risk.
Invasive alien species remain one of the greatest threats to our biodiversity, and our agricultural productivity. The reality is that exotic organisms arrive in Australia regularly and sometimes become established. A critical element of biosecurity response is to prioritise which ones are of most concern and need rapid response.
Protecting our livestock industries, native wildlife, human health and the environment from exotic or emerging pests and diseases of animals is the realm of animal biosecurity.
Australia is fortunate to be free of many highly infectious animal diseases, such as foot and mouth disease, highly pathogenic forms of avian influenza, African swine fever and many others that have serious consequences in other countries. An outbreak of any of these diseases could significantly impact the productivity of our livestock industries, and make it very difficult to trade our agricultural products overseas, as well as result in significant social and economic costs.
The 2007 equine influenza (horse flu) outbreak in Australia was a “wake up call” of how disruptive exotic disease outbreaks can be and why vigilant biosecurity is so necessary.
Avian influenza has recently been detected on poultry farms in NSW (fortunately not the strains that can affect humans). The widespread culling that followed shows how much social and economic impact individual farmers might suffer if there are future disease outbreaks. Strict farm level biosecurity is becoming increasingly the norm across many industries.
Plant pests and diseases can significantly damage Australia’s productive plant industries. They reduce yields, lower the quality of food, increase production costs and make it difficult to sell our produce in international markets.
This is true across the massive expanses of our wheat production, and in the more intensive high-value production of horticulture, wine, cotton and sugar industries.
Plant pests and diseases may also be a huge threat to our natural environment: native forests, grasslands, and shrub lands.
Again, Australia is lucky to be free of many damaging pests prevalent elsewhere in the world. Citrus greening is a disease of citrus we definitely don’t want, whilst the varroa mite, for example, has devastated honeybee productivity and pollination success in every continent except Australia.
Fewer pest and disease problems mean lower production costs. Rigorous biosecurity that delivers “pest freedom” gives Australian producers an enormous advantage in international markets and allows us to have safer and cheaper locally produced food.
Australia has an enormous shoreline and amazing biodiversity in our marine ecosystems. Marine biosecurity is dedicated to keeping these systems intact. It focuses on protecting aquaculture, ports and the environment from problems caused by invasive marine organisms. These can threaten marine infrastructure and ecosystems.
Australia is in the midst of major port expansion; new ports are being developed and international shipping is increasing dramatically. As a result the risks from marine invasive species are growing and our biosecurity response needs to grow as well.
Many emerging infectious diseases in livestock that also affect human health emerge from wild, native species. These so-called zoonotic diseases make up 70% of all the emerging diseases affecting human populations and include the likes of avian influenza, SARS and Hendra virus. The impacts of these diseases can be extremely severe and the need to manage livestock and human health risks in a unified way is also becoming much clearer.
“One Health” is a new way of looking at the connections between the environment, production animals and emerging threats to people. As with animal, plant and marine biosecurity, human biosecurity is about effective surveillance and response, being aware of the risks that are circulating the globe, having the tools to rapidly detect and diagnose them and the tools and systems to respond quickly.
Biosecurity is both a process and an outcome. A successful biosecurity system requires scientists, government, industry, and the community to cooperate. In the end it is a system of shared responsibility.
Australia is achieving this by working together across the continuum. We investigate risks offshore, focus on surveillance and detection at the border, and research effective response and management systems within our borders.
Robust emergency response arrangements are in place to manage outbreaks. But preventing pest, disease and weed incursions in the first place, through effective and smart surveillance, remains a national priority.
By Glenn Platt, Theme Leader, Local Energy Systems
Where would we be without electricity? Assuming that you own a fridge, there won’t be many points in your life when you aren’t making use of it.
But what do we mean when we talk about the electricity grid?
The grid itself is not so much a physical place as a term used to describe the three main players involved in the supply of electricity: generators, transmitters and distributors, and retailers.
Let’s take a look at each in turn.
The three main players
It all starts with generation.
In Australia, primary sources of energy, such as the sun and wind, are increasingly being used to produce electricity. But, when Australians boiled the kettle to make tea this morning, approximately 90% of the electricity used was generated at a power station by burning coal or gas.
Next step, transmission.
When you flick the kettle’s switch, electricity travels along a conductor straight to your appliance at close to the speed of light. Although this seems instantaneous, it is preceded by a sequence of events.
A transformer converts the electricity, produced at a generation plant, from low to high voltage to enable it to be transported efficiently on the transmission system, where it travels long distances from the power station to the suburbs we live in.
This part of the grid is very public. You’ve probably seen the poles and wires so many times now that you’ve stopped noticing them.
When the high voltage electricity arrives near the location where it is required, a substation transformer will change it to low voltage before carrying it along the distribution lines to individual consumers, who can then access it with the flick of a switch.
The third major player in the electricity grid is the retailer.
This is the middle man who buys wholesale electricity from generators and then sells it to you, sending you a bill every quarter.
National Electricity Market
For an example of where this all comes together, let’s look at the National Electricity Market (NEM), which operates the world’s longest interconnected power system.
As electricity supply must be closely matched to demand, the NEM uses sophisticated systems to send signals to generators, telling them how much energy to produce every few minutes.
Such control requires careful forecasting of the electricity demand, as well as the generation that will be available.
The job becomes even harder with generation from variable sources, like solar and wind, in the mix.
Once generated, electricity supply is traded much like shares on a stock market – the market operator indicates the demand for a particular time, and generators compete on price to meet that demand.
If demand exceeds supply, prices increase, and vice versa.
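To make that price mechanism concrete, here is a toy dispatch sketch in Python. The generators, capacities and offer prices are invented for illustration, and the real NEM bidding and dispatch process is far more detailed:

```python
# Simplified market clearing: stack offers from cheapest to dearest until
# forecast demand is met; the last (marginal) offer accepted sets the price.

offers = [  # (generator, capacity in MW, offer price in $/MWh) - made-up numbers
    ("wind farm", 400, 5),
    ("coal plant", 1000, 40),
    ("gas peaker", 300, 300),
]

def clear_market(offers, demand_mw):
    dispatched, price = [], 0
    for name, capacity, offer_price in sorted(offers, key=lambda o: o[2]):
        if demand_mw <= 0:
            break
        take = min(capacity, demand_mw)
        dispatched.append((name, take))
        price = offer_price  # marginal offer sets the clearing price
        demand_mw -= take
    return dispatched, price

print(clear_market(offers, demand_mw=1200))
# ([('wind farm', 400), ('coal plant', 800)], 40): the gas peaker isn't needed,
# so coal's $40/MWh offer sets the price. At 1,500 MW demand the peaker would
# run and the price would jump to $300/MWh.
```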
Grid losses and rising costs
The grid system is not without its challenges, not least of which is the enormous amount of wasted energy.
Consider this. A typical coal-fired power station loses (or wastes) almost 70% of the energy that goes into it when converting the energy in coal to electricity, and up to a further 10% is lost during transmission and distribution. An old-fashioned incandescent light bulb then wastes 98% of the energy it receives, turning only about 2% into light.
So we only end up using about half a per cent of the total energy that we started off with. The rest is wasted.
Hot-water systems aren’t much better.
In this case, we burn coal to make hot water (or steam), convert this to electricity, and then convert the electricity back to hot water in the house. Along the way, we incur a bunch of losses, and only end up with about 27% of the energy that we started off with.
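A quick back-of-the-envelope calculation, using the approximate figures quoted above (rough illustrative numbers, not precise engineering values), shows how these losses compound:

```python
# Multiply the efficiency of each stage to get the end-to-end efficiency.

def end_to_end_efficiency(stages):
    result = 1.0
    for efficiency in stages:
        result *= efficiency
    return result

# Coal to electricity (~30% efficient), grid (~90%), incandescent bulb (~2%)
lighting = end_to_end_efficiency([0.30, 0.90, 0.02])
print(f"light from an old bulb: {lighting:.1%} of the coal's energy")  # ~0.5%

# Coal to electricity (~30%), grid (~90%), electric hot-water element (~100%)
hot_water = end_to_end_efficiency([0.30, 0.90, 1.00])
print(f"electric hot water: {hot_water:.0%} of the coal's energy")     # ~27%
```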
The real elephant in the room, however, is the issue of affordability.
Retail electricity prices have increased by roughly 60% since 2007. The causes are complex and differ by state, but the replacement and refurbishment of infrastructure (essentially, the poles and wires), compliance with reliability licence conditions, and the building of new infrastructure to cater for peak demand have played the largest role.
It is the latter that lies at the heart of the problem.
Peak demand typically occurs during heat waves and cold snaps, when three-quarters of all Australian households with air-conditioning turn on these appliances to get some respite from the elements.
Sounds sensible, right? Think again.
Ausgrid, a major electricity distributor, has estimated that $11 billion worth of network infrastructure in the NEM is used for just 100 hours per year (about one per cent) to meet periods of peak demand.
That’s like building an extra eight lanes on the Sydney Harbour Bridge just to cater for the worst peak hours of the year.
Is there a solution?
One proposal for combating affordability is to introduce different electricity prices for different time periods of usage. This is known as cost reflective pricing.
While not everyone may be able, or want, to make significant changes to the times when they use electricity, introducing such a scheme should at least allow consumers to make more informed decisions.
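As a rough illustration of how cost reflective pricing changes the incentive, consider the toy comparison below. The tariffs and the household’s usage profile are invented for the example; actual rates differ by retailer and state:

```python
# Compare a flat tariff with a time-of-use ("cost reflective") tariff.

usage_kwh = {"peak": 4.0, "shoulder": 6.0, "off_peak": 10.0}  # one day's use

flat_rate = 0.30  # $/kWh
time_of_use_rates = {"peak": 0.50, "shoulder": 0.30, "off_peak": 0.15}

flat_bill = sum(usage_kwh.values()) * flat_rate
tou_bill = sum(usage_kwh[p] * rate for p, rate in time_of_use_rates.items())

print(f"flat tariff:         ${flat_bill:.2f}/day")  # $6.00
print(f"time-of-use tariff:  ${tou_bill:.2f}/day")   # $5.30

# Shift 2 kWh of peak use (say, the dishwasher) into the off-peak period.
usage_kwh["peak"] -= 2.0
usage_kwh["off_peak"] += 2.0
shifted_bill = sum(usage_kwh[p] * r for p, r in time_of_use_rates.items())
print(f"after shifting load: ${shifted_bill:.2f}/day")  # $4.60
```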
A major transformation in the Australian energy landscape, and one that poses a challenge for the traditional grid, is happening right outside your window.
Households are becoming energy generators, with one in ten Australian homes having installed solar panels. Consumers are now pushing power back the other way and, although this is an overwhelmingly positive thing, the amount of electricity that will be generated by these households can be as difficult to predict as the weather itself.
Reducing peak demand, integrating renewable energy and maintaining reliability are all possible. But we need to make some challenging decisions about how the grid should operate in the future, and how we’ll pay for it.
As part of the Future Grid Forum, CSIRO is working with the industry and government to ensure that when Australians boil the kettle in 2050, we are using electricity generated from the most cost-competitive, low-emission energy sources possible.
Restoring the productivity of our electricity system won’t reduce prices overnight, but it will lay the foundation for a cost-competitive energy system in the future.
This article is part of CSIRO’s Let’s Talk About Energy campaign which is a conversation about Australia’s energy future.
By Ian Opperman, Director, Digital Productivity and Services National Research Flagship
Australia has low unemployment, a high standard of living, fabulous beaches and great weather. But we don’t have tight, large-scale coupling between our creative types and our industry types. At least not in the ICT sector.
Many multinationals have their own creative centres, their own “dreamers”, and are typically very good at bringing the dreamers together with people and structures who implement their dreams (the “doers”), and finding the financial resources (the “dollars”) to make it all happen.
There are also some excellent examples of home-grown innovation getting this right, but only a few have made it big – it is the exception rather than the rule.
We traditionally think of ourselves as a country of miners and farmers, of rugged individuals. Our folklore is jumbucks and billabongs rather than computer nerds to the rescue. Many still believe we live in a “dig it up and ship it” or a “shear it, shoot it, ship it” economy. In practice, we live in a services economy, much of which is digital. More than 70 per cent of our GDP is services based and this is growing.
Continuing improvements in communications networks and technology will profoundly impact many aspects of the economy through “tele” (tele-work, tele-health, tele-education). Innovation in digitally enabled services will dramatically change which services are delivered, improve quality of life, as well as drive productivity improvements.
The downside is that access to a globally connected world brings challenges in terms of the potential for a digital divide (digital “haves” and “have nots”), creation of unprecedented services competition (other countries selling to us), challenges for privacy and identity security, and increasing reliance on increasingly vulnerable systems. What we cannot do is sit back and let it just happen.
We have no shortage of ideas and clever people. Australia is considered a world leader in radio communications and in a number of high-impact science areas, but we also have an ageing population, many industry sectors with declining growth, a health sector that is now our biggest employer, and exposure to natural disasters including floods, droughts and bushfires. We can develop solutions to help ourselves, and services to export to the rest of the world. But the challenge always is achieving scale. The ICT finalists are inspiring examples of dreamers bringing their ideas forward. If you are a member of the doers, have dollars or can bring the data, strike up a conversation with our dreamers.
By Kevin Hennessy, Principal Research Scientist, Marine & Atmospheric Research
Recent fires in New South Wales highlight our current vulnerability, remind us about potential future risks and prompt us to think more strategically about risk management. Some key questions have come to the fore, such as:
Is climate change to blame for the NSW fires?
Bushfires are influenced by many factors, including warmer and drier conditions in preceding months, days with extreme heat, strong winds and low humidity, urban development patterns, and fuel loads and their management.
Together with the fuel loads that have accumulated over the past few years, these conditions have increased fire risk. Other parts of Australia need to prepare for an active fire season.
While it’s almost impossible to attribute an individual extreme weather event to climate change, the risk of fire has increased in south-east Australia due to a warming and drying trend that is partly due to increases in greenhouse gases.
What is fire risk?
Fire is a natural part of the Australian landscape. Fire weather risk can be quantified using the Forest Fire Danger Index (FFDI).
Annual cumulative FFDI, which integrates daily fire weather across the year, increased significantly at 16 of 38 Australian sites between 1973 and 2010. The number of significant increases is greatest in the south-east, while the largest trends occurred inland rather than near the coast. The largest increases in seasonal FFDI occurred during spring and autumn, while summer had the fewest significant trends.
This indicates a lengthened fire season.
Fire risk is different to fire weather risk, as fire risk is affected by other factors, such as vegetation and human behaviour, in addition to the weather.
What can we expect in the future?
Climate change over the coming decades is likely to significantly alter fire patterns, their impact and their management in Australia.
An increase in fire-weather risk is likely with warmer and drier conditions in southern and eastern Australia.
The rate of increase depends on whether global greenhouse gases follow a low or high emission scenario. Carbon dioxide emissions have been tracking the high scenario over the past decade.
The number of “extreme” fire danger days in south-east Australia generally increases by 5-25% by 2020 for the low scenarios and by 15-65% for the high scenarios. By 2050, the increases are generally 10-50% for the low scenarios and 100-300% for the high scenarios. This means more total fire ban days.
Fire danger periods are likely to be more prolonged, so the fire season will lengthen.
What should we do now?
Without adaptation, there will be increased losses associated with the projected increase in fire weather events.
Adaptation in the short-term can lead to greater preparedness, including many well established actions such as fire action plans, vegetation management and evacuations; while adaptation in the long-term can reduce the fire risk experienced by society, through actions such as appropriate building standards and planning regulations in fire-prone areas.
Kevin Hennessy receives funding from the Commonwealth Department of Environment.
Galaxies may look pretty and delicate, with their swirls of stars of many colours – but don’t be fooled. At the heart of every galaxy lies a supermassive black hole, including in our own Milky Way.
Black holes in some nearby galaxies contain ten billion times the mass of our sun in a volume a few times the size of our solar system. That’s a lot of mass in a very small space – not even light travels fast enough to escape a black hole’s gravity.
So how did they get that big? In the journal Science, we tested a commonly-held view that black holes become supermassive by merging with other black holes – and found the answer is not quite that simple.
Searching for gravitational waves
The answer may lie in a related question: when two galaxies collide to form a new galaxy, what happens to their black holes?
When galaxies collide, they form a new, bigger galaxy. The colliding galaxies’ black holes sink to the centre of this new galaxy and orbit each other, eventually combining to form a new, bigger black hole.
Black holes, as the name suggests, are very hard to observe. But orbiting black holes are the strongest emitters in the universe of an exotic form of energy called gravitational waves.
Gravitational waves are a prediction of Einstein’s General Theory of Relativity and are produced by very massive, compact objects changing speed or direction. This, in turn, causes the measured distances between objects to change.
For example, a gravitational wave passing through your computer screen will cause it to first stretch in one direction, then in a perpendicular direction, over and over again.
Fortunately for your laptop, but unfortunately for astronomers, gravitational waves are very weak. Gravitational waves from a pair of black holes in a nearby galaxy would change your screen size by about the width of one atomic nucleus over ten years.
This is where pulsars come in. While not quite as extreme as black holes, pulsars are massive and compact enough to crush atoms into a sea of nuclei and electrons, compressing up to twice the mass of our sun into a volume the size of a large city.
So how do pulsars help? First, they rotate very quickly – some of them up to 700 times per second – and very predictably. They emit intense lighthouse-like beams of radio waves which, when they sweep past the Earth, appear as regular “ticks”.
So here’s the punchline: gravitational waves from pairs of black holes throughout the universe will disrupt the otherwise extremely regular ticks from pulsars in a way we can measure.
Our pulsar measurements
We found that the theory that black holes grew mainly by absorbing other black holes is not consistent with our data.
If the theory were right, gravitational waves would exist at a level that would make the pulsar ticks appear less regular than our measurements show. This means that black holes must have grown by other means, such as by consuming vast swathes of gas churned up during galaxy mergers.
The measurements span over ten years, and are some of the most precise in existence.
These data are being collected to eventually directly observe gravitational waves. In our work, however, we compared the data with gravitational wave predictions from various theories for how black holes grew.
Our work gives us great encouragement for the prospects for using pulsars to detect gravitational waves from black holes.
We are confident that gravitational waves are out there – galaxies, after all, do collide – and we have shown that we can measure pulsar ticks with sufficient accuracy to be able to detect gravitational waves in the near future.
In the meantime, we can even use the absence of gravitational waves to study elusive super-massive black holes.
By Peter Kambouris, Theme Leader, Agile Manufacturing
In 1977, George Lucas’ Star Wars introduced us to R2-D2, an intelligent bot that embodied modern assistive robotics: he could anticipate needs and perform a number of tasks with minimal instruction.
Almost 40 years later, assistive robotics offer game-changing advantages to manufacturing, and we’ve been canvassing Australian manufacturers to find out how robots can help them overcome current challenges.
Through a series of workshops held in Melbourne and Brisbane, we investigated what manufacturers want and where they envisage using robotics to help solve current problems.
Here, I present Australian manufacturing’s wish-list for robotics.
Robots to make the workplace safer
Australian manufacturers want to use robots to orient, lift and manipulate components so that workers can assemble them with greater precision in manufacturing lines.
The major objective is to reduce occupational health and safety events, compensation claims and lost time from worker absenteeism – though improved product quality is often a bonus.
Robots have established potential to take on jobs that people don’t want to do and really shouldn’t be doing, such as hazardous or highly repetitive tasks that make workers vulnerable to repetitive-stress injuries.
In 2011, Dulux Group installed a robotic system in a Victorian factory to assist workers handling heavy bags of combustible powder, which also posed a potential respiratory hazard.
The robotic system eliminated occupational health and safety incidents and allowed nine workers to be redeployed from a potentially hazardous and uncomfortable working environment.
Robots to help increase productivity
Australian manufacturers want assistive robots as an active extra pair of hands to increase worker production output.
An industrial robot from automation technology company ABB Australia, installed in 2008, has increased productivity and turnover for Brisbane-based Drake Trailers, one of Australia’s largest manufacturers of low-loader and specialist large-scale trailers.
The ABB robot helped Drake Trailers lower unit cost while achieving better finish and repeatability. The increase in productivity at the family-owned company translated into an annual turnover of A$45-50 million, while staff numbers doubled within five years.
The spray painting division of Melbourne-based Hilustre Coatings handles a wide range of products, including kitchen cabinets, display panels, shop fittings and automotive components.
The variation in size, shape and product diversity of the one million items processed each year meant that a robot had to be capable of quick and easy transition to cope with the requirements of the facility.
The robotic process installed allowed Hilustre to increase efficiency by about 40% and to use less paint per run while increasing quality and consistency. Lower wastage of paint and solvent has improved the economy of the operation.
Improving product quality using robots
Assistive robotics offer lots of opportunity for improving the quality of manufactured products. Australian manufacturers are keen to see this happen through the use of real-time physical and virtual supervision to improve quality and reduce product rejection rates and waste:
- by reducing the time required for inspection and certification through real-time capture of quality and compliance data during manufacture
- by improving product quality through reduced misalignment and incorrect assembly of components.
Reducing set-up time and downtime
Assistive automation offers the opportunity to reduce set-up time through automated jig recalibration (giving a manufacturing process the ability to switch between different goods), or via virtual capture and replay of processes to identify time-saving opportunities. Virtual manuals allow a rapid response, also reducing set-up time.
Similarly, virtual technologies make expert assistance readily available for problem solving and repairs in remote areas through a computer link, enabling reductions in downtime. They also offer the potential for a shift in business model through sale of virtual services for maintenance.
Ford has been using virtual imaging of vehicles and interiors for some time. The company now plans to simulate the full assembly-line production process to improve quality and cut costs in real-world manufacturing facilities.
Ford uses camera technology to scan and digitise real-world manufacturing facilities to create ultra-realistic 3D virtual assembly environments.
Extending workforce participation for older workers
Faced with an ageing workforce, Australian manufacturers are keen to retain skilled employees by using augmentation technologies, such as tele-supervision and tele-operation, to allow employees safe and productive extended working lives, and to facilitate phased retirement programs.
For example, General Electric now uses tele-operated “spiderman” wall-climbing robots for quality inspection of their 90-metre-tall wind turbine towers.
Instead of an inspector standing in a field, stopping the turbine and photographing potential problem spots with a powerful telephoto lens, the robot carries a camera that photographs the turbine blades and beams the images back to its operator on the ground quickly and efficiently.
If you too would like assistive robotics in your manufacturing business, we’re seeking ongoing input from Australian manufacturers. Send me your thoughts at email@example.com.
By Simon Lucey, Principal Research Scientist & Research Project Leader
We know buying clothes and accessories online carries a certain level of risk – what if the delivered product doesn’t fit or looks ridiculous?
But thanks to research into augmented reality you could soon find yourself “trying on” your potential purchases in a “virtual changeroom”.
For some years, shoppers have been able to see photos and movies of shops’ inventory on the internet, helping them decide what to buy.
While pictures can tell a thousand words, they don’t tell us the whole story. For some items, it is not the absolute characteristics of the product that are important – such as “do I like that lipstick colour?” – but the relative characteristic of the product: “do I like that lipstick on my lips?”.
This affects customers’ buying decisions and overall satisfaction with the product.
Enter augmented reality, which allows a person to view a virtual item in the context of the real physical world.
Augmented reality and computer vision
The concept of augmented reality centres on augmenting or supplementing an actual image in real time with a virtual object or effect, such as artificially adding a certain shade of lipstick to a person’s lips, or placing a virtual couch in their living room.
Machines, however, do not see in the same way as humans and animals.
To a machine a digital image is simply a vast array of numbers, which computer scientists refer to as pixels. Pixels only record the relative intensity of light and do not explicitly record anything about the type of object the light ray bounced off, or its geometric placement.
Fortunately for us, the human visual system is able to take these pixel measurements of light and rapidly infer such information. In fact, the human visual system is able to do this with such ease that people often mistakenly assume that this task should also be easy for machines.
The challenge of instilling machines with such an ability is known in computer science circles as computer vision. The inability of machines to “see” the way humans “see” is the fundamental barrier prohibiting the wider deployment of augmented reality in online retail and beyond.
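To make this concrete, here is a minimal sketch – an illustration using the open-source OpenCV library, not our own software – of what a machine actually has to work with. The image is nothing more than an array of numbers, and even finding a face in that array requires a dedicated detection algorithm (here, an off-the-shelf Haar-cascade detector; the file name "photo.jpg" is simply a placeholder):

import cv2

# To the machine, the photo is just a height x width x 3 array of numbers.
image = cv2.imread("photo.jpg")
print(image.shape, image.dtype)            # e.g. (720, 1280, 3) uint8

grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# An off-the-shelf Haar-cascade face detector: it infers "there is a face
# here" purely from patterns of pixel intensities.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

# Each detection is only a bounding box around the face.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_faces.jpg", image)

Even this detector only tells us roughly where a face is – nothing about which pixels belong to which part of it, or where the camera was.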
Semantic 3D from 2D
Over the past few years, computer vision algorithms have become increasingly proficient at detecting objects of interest (such as faces, pedestrians and vehicles). But knowing which pixel in an image belongs to which object class is not enough if we want to virtually try on a product.
For such an application one also needs to know which part of the object a pixel belongs to (such as eyes, mouth or nose) and an estimate of the 3D camera position from which the image was captured. This detailed information is essential if one is to create a reasonable facsimile of how a virtual accessory or product would appear and move in the real world.
The team of scientists and graduate students I lead at CSIRO has been working on making this computer vision challenge a reality. In particular, we have been pioneering technology that can accurately locate landmarks on the face in 3D from just a sequence of 2D digital images.
Our recent research has taken advantage of the rapid increase in computation power on mobile smart devices along with the advancement of new machine learning methods for learning intrinsic redundancies for objects of interest (such as faces) from massive image datasets.
A recent advancement of particular note is our group’s ability to accurately estimate tens of thousands of points in 3D on a subject’s face using just a sequence of 2D digital images of the subject turning his or her head.
Previously, you would need special hardware (such as a depth sensor) to obtain a 3D scan of this quality, but with our new approach we can obtain scans of similar quality using the standard 2D cameras found on smartphones, tablets and laptops.
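As a toy illustration of the principle at work – not our tracking method itself – the short sketch below triangulates a single 3D facial landmark from its 2D positions in two frames, assuming the camera intrinsics and the relative pose between the frames are already known (all of the numbers are invented for the example; in practice, recovering them from ordinary video is the hard part):

import numpy as np
import cv2

# Assumed camera intrinsics: focal length and principal point, in pixels.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Frame 1: camera at the origin. Frame 2: the head (equivalently, the
# camera) has rotated and shifted slightly between the two frames.
R, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))     # ~11 degree turn
t = np.array([[-0.1], [0.0], [0.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # 3x4 projection matrix
P2 = K @ np.hstack([R, t])

# A made-up 3D landmark in front of the camera (homogeneous coordinates).
X_true = np.array([[0.05], [-0.02], [1.2], [1.0]])

# Project it into each frame to get the 2D observations a tracker would see.
x1 = P1 @ X_true
x1 = x1[:2] / x1[2]
x2 = P2 @ X_true
x2 = x2[:2] / x2[2]

# Triangulate back from the two 2D views to a 3D position.
X_h = cv2.triangulatePoints(P1, P2, x1, x2)          # 4x1 homogeneous
print((X_h[:3] / X_h[3]).ravel())                    # ~ [0.05, -0.02, 1.2]

Repeat this for tens of thousands of tracked landmarks across many frames and you have, in essence, a dense 3D scan built from nothing but 2D images.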
With this type of detailed 3D information we can now place a bevy of virtual objects suitable for retail – such as glasses, hats and makeup – on the face with remarkable realism.
This realism stems from the accuracy and density of the 3D information we obtain from the sequence of 2D digital images captured by the smart device.
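The final augmentation step can be sketched just as simply. Assuming a tracker has already supplied the 2D outline of the lips (the coordinates below are invented placeholders), a lipstick shade can be blended into the frame only inside that outline:

import numpy as np
import cv2

frame = cv2.imread("frame.jpg")

# Made-up lip outline (x, y) points that a landmark tracker would supply.
lip_outline = np.array([[300, 420], [330, 410], [360, 412],
                        [390, 418], [365, 445], [330, 447]], dtype=np.int32)

# Build a mask covering the lip polygon.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [lip_outline], 255)

# Blend a lipstick colour into the frame, but only where the mask is set.
lipstick = np.zeros_like(frame)
lipstick[:, :] = (60, 40, 200)                     # a red shade, in BGR
alpha = 0.5                                        # tint strength
blended = cv2.addWeighted(frame, 1 - alpha, lipstick, alpha, 0)
frame[mask == 255] = blended[mask == 255]

cv2.imwrite("frame_tinted.jpg", frame)

How convincing the result looks rests on how accurately and densely those landmarks are tracked as the face moves – which is precisely what the 3D work described above provides.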
Our technology is disruptive in the sense that it now gives consumers the ability to try on merchandise and products online in a way not possible before without specialised hardware.
The commercial world is now waking up to the numerous possibilities for this technology.
CSIRO’s computer vision group has been working with industry partners over the past two years, pioneering the development of real-time 3D facial landmark tracking software that requires only 2D video from any smart device or laptop.
Faces, however, are just the beginning of this story.
One can imagine a not-too-distant future where a person shopping for a sofa could walk into their living room and, using their smart device, virtually place a sofa anywhere in the room, viewing it in different colours, styles and positions. With a tap of the device, they could buy the sofa, knowing whether it will fit in the room and match the rest of the furniture.
Advances in computer vision technologies will soon have the potential to not only tell us a thousand words through an image – but perhaps the whole novel as well.
By John Barnes, Titanium Technologies Leader
It’s not easy being a small business in the current manufacturing environment. The face of manufacturing is changing, and businesses are eager for technological advances that could give them a competitive advantage.
Maybe 3D printing can help.
From design to finished product, 3D printing can speed up the manufacturing process and reduce its costs. Let’s have a look.
The big advantage
Being an additive technology, 3D printing offers significant cost savings by using less material than traditional manufacturing methods. When you are forced to cut a part from a solid block, every extra detail means removing more material, which increases cost.
Most machines that print in metal can also recycle unused powder (the “ink” of 3D printing).
There are many types of 3D printers available today. They use varying technologies and energy sources to fuse a range of plastic and metal feedstocks, with the plastic printers having the broadest capability.
Plastic printers range from “personal” units sold for thousands of dollars, to complex manufacturing units costing in the hundreds of thousands.
While 3D printing technologies for metals are less common at present, applications for both materials are expanding rapidly.
Design: The time a product spends in the design cycle can affect time to market significantly. Use of 3D printing has been estimated to reduce development time by up to 96%.
Using 3D printing, multiple iterations of product design variations can be explored simultaneously during the conceptual stage of design, without investing in the tools to make the product.
Car manufacturers have used 3D printed mock-ups of parts for years, from rear view mirrors to front end fascias.
Products can be created at a digital facility as they are needed, eliminating customs inspections and duties and minimising transportation costs.
Tools: 3D printing has been used very successfully to manufacture jigs, fixtures, gauges and shop tools quickly and inexpensively. These are the tools made to assemble more complex parts, and they can be expensive for manufacturers.
3D printer manufacturer Stratasys cites savings of up to 90% on fabrication of fixtures, and a one-year profit gain of US$60,000 to US$230,000.
A specialised example of this use of 3D printing is in medicine, where surgeons create mock-ups for complicated surgeries, allowing them to practise before operating – for example, planning the placement of drilled holes so that pins or screws can be installed with extreme precision.
Building in one: Many products need to be assembled from several parts. 3D printing enables manufacture of complex components which cannot be made using conventional methods, opening the way to unitisation. This is the ability to combine two or more simple parts, prior to assembly, into one large complex component.
Costs are reduced because both the part count and time required for assembly are reduced. The unitisation process is similar to modular building practices, where pre-fabricated rooms complete with internal fixtures and fittings are assembled on site.
Specialisation: 3D printing has found a niche in production of customised and speciality products such as bio-medical prostheses and intricate lost-wax casting moulds for jewellery designs.
Finding a 3D printer
CSIRO has initiated the Australian Additive Manufacturing Network to facilitate collaboration between research organisations and industry. The purpose of the network is to make effective use of 3D printers and assist Australian manufacturing companies to compete globally.
There is a growing list of service bureaus who can print sample pieces, enabling trial of the technology without the need to invest in an actual 3D printer.
3D printing is unlikely to fully replace conventional manufacturing technologies – but thanks to the savings in time, risk, and materials it offers, future factories are just as likely to include 3D printers as conventional machines.
By Wilna Vosloo, Principal Investigator, FMD Risk Management
Co-authored by Dr Juan Lubroth, Chief Veterinary Officer, Food and Agriculture Organization of the United Nations (FAO).
Australia has been free of foot and mouth disease since 1872, but it is still considered the most serious biosecurity threat to Australia’s agricultural industries. A widespread outbreak could cost the economy more than A$16 billion in the first 12 months.
Can foot and mouth disease actually be controlled? We think so, and we can learn a lot from how rinderpest – a highly virulent cattle plague – was eradicated.
A model for eradication
Even before the 2011 global declaration of freedom from rinderpest by the United Nations, many were asking what animal disease we could focus on next. Rinderpest was only the second virus to be globally eradicated, after smallpox.
For centuries, rinderpest devastated cattle and buffalo populations in Europe, Asia, the Middle East and Africa. It led to the downfall of armies, caused rural famine and created inestimable hardship.
The disease was not restricted by national borders: international coordination was fundamental for managing, controlling and finally ridding the planet of the virus.
Rinderpest’s reintroduction into Europe led to the establishment of a coordinating authority, the World Organisation for Animal Health, in 1924. When the Food and Agriculture Organization of the United Nations was created in 1945, its charter to improve food and nutrition across the globe could only be realised by fighting devastating livestock diseases such as rinderpest and foot and mouth disease.
After decades of research and significant investment, rinderpest was confined to only a handful of geographical areas by the late 1990s. The last outbreak was reported in 2001.
The rinderpest success story makes it clear there are three things needed if you are to eradicate an animal disease. You need political will, veterinary and local knowledge about how the disease spreads, and adequate tools (such as diagnostic assays and quality vaccines) for intervention.
These factors apply to many animal diseases, so control does not need to focus on one disease alone. Investment in improved veterinary services, for example, doesn’t just apply to disease elimination; it benefits animal health, community livelihoods and a country’s whole economy.
Can we control and eliminate foot and mouth disease?
As with rinderpest, tackling foot and mouth disease needs a global approach. Recent outbreaks in previously disease-free countries show that a piecemeal approach isn’t working: we must control the disease at source, in the places where the virus is endemic.
But disease-free countries also have to invest in their neighbours’ efforts to control and eliminate the disease. Australia is investing in neighbouring countries such as Indonesia and the Philippines, helping them with control strategies, laboratory facilities, and staff training through CSIRO and AusAID. Those countries are now free of foot and mouth disease.
Once a country is free of foot and mouth disease it can take advantage of lucrative trade with other disease-free countries. This trade isn’t just in animals, milk and meat, but also in genetics. But it takes millions of dollars to maintain freedom from foot and mouth disease, and to keep those market opportunities – worth billions – open.
Meanwhile, resource-poor countries are devastated by the effects of foot and mouth disease: reduced milk quantity and quality, weight loss and severe lameness. They are further crippled by unploughed fields, inability to transport produce to market for sale and loss of available food and quality nutrients for humans.
Unfortunately, countries where such debilitating diseases are circulating usually also have competing priorities in other sectors such as human health, education, governance and maintaining civil and political stability.
We know we have the tools, the diagnostic ability and enough knowledge about disease transmission to take on foot and mouth disease. So, globally, can we tackle the threat in endemic settings head on?
Improving practices at farm level is a good first step
There is much work to be done. But rather than focusing specifically on eradicating foot and mouth disease, countries where the disease exists could start by improving on-farm biosecurity generally.
They should improve production practices and hygiene, thereby increasing efficiency in milk and meat production, and improving the way they manage natural resources.
This can spread benefits to other areas: child and maternal care, nutrition and hygiene for the farmers and communities around the world. Boosting veterinary services and information sharing provides better health care and builds trust with trading partners.
If we took this approach, we would certainly reduce the effect of production and trade-related diseases, as well as a multitude of diseases humans can get from animals and food. Such a holistic strategy would also increase access to quality drugs and veterinary vaccines across myriad microbial threats, and improve the availability of high-quality nutritious foods.
It is therefore not possible to focus on only one disease when embarking on disease eradication or control. We need a global approach – targeted and tailored to the prevailing social and economic conditions – against those diseases that affect livelihoods, human health and global-to-local trade opportunities.
With significant effort and investment, control and eradication are possible – not just of foot and mouth disease, but of all high-impact diseases that threaten today’s and tomorrow’s world.
By Dr Kenneth Lee, Director, CSIRO Wealth from Oceans Flagship
If anything good is to come from the devastation caused by the Deepwater Horizon oil spill in the Gulf of Mexico and Australia’s Montara oil and gas leak the year before, it is that we learn from our mistakes.
It’s now more than three years since the April 2010 explosion and oil spill at the BP-operated drilling rig in the Gulf of Mexico, considered the world’s largest accidental marine oil spill. Its effects are still being felt today – and there is still too much we don’t yet know about its long-term costs.
That’s the challenge for scientists, the oil and gas industry and others: to develop better ways to safely tap into the wealth of our oceans to meet huge global demand for oil and gas, while still protecting our marine habitat and the communities who depend on it.
Witness to a disaster
As a scientist involved in the development and application of oil spill counter-measures, back in 2010 I was asked by US Government agencies to assist in the oil spill response in the Gulf.
While working with a science team to monitor the effectiveness and potential environmental impacts of the clean-up, I witnessed how the spill affected the region’s environment and some of the surrounding local communities.
As a result of this firsthand experience, I was asked to serve on the US National Research Council committee charged by Congress with examining the broader environmental, economic and social impacts of the Gulf spill.
We found there is a substantial gap in our understanding of the social and economic impacts of the oil spill on the multiple uses of the Gulf, such as for tourism and fisheries.
Our study also highlighted the limits in our knowledge about processes in the deep sea ecosystem, such as nutrient recycling and microbial degradation of oil, which could influence the level of productivity of the ocean.
Oil and gas in Australia
The US isn’t the only place we can learn from. Closer to home, four years after Australia’s worst oil and gas leak at the Montara station in the Timor Sea off the coast of Western Australia, the station resumed production in June this year.
There has been a transformation of its management culture, operational capabilities, safety processes and environmental systems. PTTEP Australasia (the operators of the Montara station) said that the changes were all validated by five independent reviews commissioned by the Australian Government.
Learning lessons from such experiences is more important than ever, given that oil and gas remain a critical part in Australia’s future energy needs.
Domestic demand for oil is expected to remain fairly constant through to 2035, with imports likely to triple. By 2035, Australia’s gas production is expected to quadruple and, by the end of this decade, Australia may rival Qatar as the world’s largest exporter of liquefied natural gas (LNG).
Despite the vast majority of our oil currently coming from offshore Australia, our nation’s deep sea remains relatively unexplored and there is significant potential for new resources to be found in deepwater frontier basins, such as in the Great Australian Bight.
In the last few months alone, 13 new offshore petroleum exploration permits have been granted for the Indian Ocean off the coast of Western Australia, as well as offshore from Tasmania.
Plugging gaps in our knowledge
Research collaborations with industry, government agencies and academia provide the essential scientific information required for ecosystem-based management decisions that allow society to benefit from its commercial activities, while protecting our marine habitat and the life within it.
It is vital that we look at impacts across the full life cycle of offshore energy activities. For example, many people don’t realise that oil and gas production generates operational waste day to day, and we want to know the long-term effects of this waste on the environment.
Of course, we need good processes to quickly assess and reduce environmental damage when oil spills occur. But we also need to take a bigger picture approach, so that we better understand the wider economic and social impacts of spills, and of oil and gas activities in general.
Our nation’s quest for energy is of economic, social and environmental significance and we need to ensure we have the best available information to inform decision-makers in industry, regulation and government.
Reliable socio-economic and environmental assessments are needed for better informed decisions on applications for offshore oil and gas operations. Such assessments can also serve as a baseline for the guidance of spill response operations and subsequent damage assessments, if a spill ever occurs.
By conducting whole-of-ecosystem studies that examine everything from the sea floor to the ecology of our ocean’s top predators, we can establish benchmarks so that, if environmental damage does occur, we know what the healthy ecosystem looked like and have the knowledge to eventually return it to its original state.
Strengthening ties with industry
Originally from Canada, I now call Perth home after my recent appointment as Director of CSIRO’s Wealth from Oceans National Research Flagship. Australia’s national science agency has a long-standing relationship with the oil and gas industry. The Wealth from Oceans Flagship intends to strengthen its collaboration with industry even further.
In the past, industry-research partnerships traditionally focused on improving production technologies and solutions. However, our focus for oil and gas research now includes environmental, economic and social factors, including risk assessments for regulatory approval, exploration, production, transportation, decommissioning, and emergency response to spills and mitigation.
In April this year, a team of scientists from CSIRO, the South Australian Research and Development Institute (SARDI) and the University of Tennessee returned from a research voyage to the Great Australian Bight. BP Developments Australia has been granted exploration rights in the Bight and is now collaborating with CSIRO, SARDI, University of Adelaide and Flinders University to conduct one of only a few whole-of-ecosystem studies ever undertaken in Australia.
This four-year, $20 million collaboration will examine the oceanography, ecology and geochemistry of the Bight. It will also conduct socio-economic research on communities and businesses dependent on the Bight, to ensure that future developments co-exist with the area’s environment, industries and community.
Successful collaborations can benefit the bottom line and the environment alike. In 2012 Petronas and CSIRO launched Pipeassure, a material used to protect pipelines against corrosion in harsh marine environments. This product, now commercially available, offers considerable benefits over conventional repair technologies and reduces production downtime, benefiting both the operating company and the environment.
Balancing our need for offshore oil and gas while minimising the legacy of environmental impact on our marine life is a major challenge worldwide. My goal is for Australia to be a leader in setting standards for environmental protection, as well as in developing technology and training experts, ready to work in a globalised industry.