By James Davidson and Pamela Tyers
How do you eat your Easter chocolate? Do you suck it or chew it? Does your tongue smear the inside of your mouth as the chocolate melts, or does it get chomped by your back teeth then sent down your throat?
It’s true: some of us suck and some of us chew. And whichever way we break down food in our mouths affects the taste sensation.
Flavour is released through the movement of taste components – salt, sugar and fat among them – and the time they take to reach our taste buds. If we know where to place those tasty bits in foods so they deliver maximum flavour before we swallow, we can use less of the unhealthy ingredients – our inefficient chewing means we don’t taste much of them anyway.
For example, bread would taste unappetising if too much salt were removed from it. But science can help us understand how to remove some of the less healthy components from foods while retaining their familiar, delicious taste.
Enter our new 3D dynamic virtual mouth – the world’s first – which is helping our researchers understand how foods break down in the mouth, as well as how the food components are transported around the mouth, and how we perceive flavours. Using a nifty technique called smooth particle hydrodynamics, we can model the chewing process on specific foods and gather valuable data about how components such as salt, sugar and fat are distributed and interact with our mouths at the microscopic level.
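For the technically curious, the smoothed particle hydrodynamics idea behind the virtual mouth is easy to sketch: the food and saliva are represented as particles, and quantities such as density are estimated by summing over neighbouring particles, weighted by a smoothing kernel. Here’s a minimal sketch in Python using the standard cubic-spline kernel – the particle positions, masses and smoothing length are invented purely for illustration:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic-spline SPH kernel with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)              # 3D normalisation constant
    w = np.where(q < 0.5,
                 6.0 * (q**3 - q**2) + 1.0,
                 np.where(q < 1.0, 2.0 * (1.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """rho_i = sum_j m_j * W(|r_i - r_j|, h) over all particles j."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Toy example: a small blob of equal-mass "chocolate" particles
rng = np.random.default_rng(1)
pos = rng.random((50, 3)) * 0.01              # 50 particles in a 1 cm cube
rho = sph_density(pos, np.full(50, 1e-6), h=0.005)
print(rho.min(), rho.max())                   # centre particles are denser
```

In a full simulation, the same kernel machinery also drives the pressure and viscosity forces that move particles around – which is what lets a model like ours track where components such as salt, sugar and fat end up in the mouth.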
We’re using it to make food products with less salt, sugar and fat and incorporate more wholegrains, fibre and nutrients without affecting the taste.
It’s part of research that will help us understand how we can modify and develop particular food products with more efficient release of the flavour, aroma and taste of our everyday foods.
And it’s good news for all of us. Eighty percent of our daily diet is processed foods – think breakfast cereals, sliced meats, pasta, sauces, bread and more. So, creating healthier processed foods will help tackle widespread issues such as obesity and chronic lifestyle diseases.
In fact, our scientific and nutritional advice to government and industry has so far helped remove 2,200 tons of salt from the Australian food supply, and reduced our population’s salt consumption by 4 per cent.
Oh…and we’ve also used the virtual mouth to model just how we break down our Easter chocolate.
As the teeth crush the egg, the chocolate fractures and releases the caramel. The chocolate coating collapses further and the tongue moves to reposition the food between the teeth for the next chewing cycle. The caramel then pours out of the chocolate into the mouth cavity.
With this virtual mouth, variations to thickness of chocolate, chocolate texture, caramel viscosity, and sugar, salt and fat concentrations and locations can all be modified simply and quickly to test the effects on how the flavours are released.
Now that’s something to chew on. Happy Easter!
Media contact: James Davidson, 03 9545 2185, email@example.com
It’s often hard to understand what’s happening inside us, because the processes and phenomena that influence our bodies and impact our health are invisible.
Not being able to understand why we’re sick or why our body is acting the way it does can add to the stress and strain of illness.
But now, a new generation of movie makers are drawing back the curtain, revealing the hidden secrets of our marvellous biology and setting new standards for communicating biological science to the world.
Three spectacular new biomedical animations were premiered today during a red carpet event at Federation Square in Melbourne.
The molecular movies bring to life some very complex processes – researched by health scientists and detailed in scientific journals most of us never see. They showcase the work of VIZBIplus – Visualising the Future of Biomedicine, a project that is making the invisible visible, so that unseen bodily processes are easier to understand and we can make better choices about our health and lifestyle.
With BAFTA and Emmy award winning biomedical animator Drew Berry as mentor, three talented scientific animators – Kate Patterson (Garvan Institute of Medical Research), Chris Hammang (CSIRO) and Maja Divjak (Walter and Eliza Hall Institute of Medical Research) – have created biomedically accurate animations, showing what actually happens in our bodies at the micro scale.
The animators used the same or similar technology to that used by DreamWorks and Pixar Animation Studios, as well as video game creators, to paint mesmerising magnifications of our interior molecular landscapes. While fantastic, the animations are not fantasies: they are well-researched 3D representations of cutting-edge biomedical research.
Kate Patterson’s animation shows that cancer is not a single disease. She highlights the role of the tumour suppressor protein p53, known as ‘the guardian of the cell’, in the formation of many cancer types.
Chris Hammang’s animation describes how starch gets broken down in the gut. It is based on our very own health research about resistant starch, a type of dietary fibre found in foods like beans and legumes that protects against colorectal cancer – one of Australia’s biggest killers. Chris shows us the ‘why’ behind advice to change our dietary habits.
Maja Divjak’s animation highlights how diseases associated with inflammation, such as type 2 diabetes, are ‘lifestyle’ diseases that represent some of the greatest health threats of the 21st century.
With our current ‘YouTube generation’ opting to watch rather than read, biomedical animations will play a key role in revealing the mysteries of science. These videos will allow researchers to communicate the exciting and complex advances in medicine that can’t be seen by the naked eye.
Watch all the videos here and be among the first to see these amazing visualisations!
Picture this. It’s a beautiful autumn day in Melbourne. You’re about to embark on a walking tour to discover some of the city’s finest architecture. My name is Carrie and I’ll be your tour guide.
We begin at one of the city’s more stately buildings – the Shrine of Remembrance. This grand temple-like structure was built back in 1926 and is located right next to the Botanic Gardens. It’s the focus of the city’s ANZAC Day ceremonies each year – all the more so in this ANZAC Centenary commemorating the start of WWI.
This month our scientists at CSIRO brought high tech to history by mapping the Shrine using a 3D laser scanner, preserving it digitally with a tool called Zebedee.
As you can see, it’s very timely. Major renovations at the Shrine are underway to get ready for commemorations of the Gallipoli landing’s 100th anniversary in 2015. It’s part of the $45M ‘Galleries of Remembrance’ project.
The Shrine joins a select group of heritage sites mapped in 3D by the Zebedee scanner, along with Brisbane’s Fort Lytton, and even the Leaning Tower of Pisa.
Now I personally get quite excited about architectural drawings, but these 3D maps add detailed information for building managers and heritage experts by measuring the actual built spaces. Zebedee technology offers a new way for recording some of our priceless treasures.
Let me show you one of the interior images of the Shrine. These amazing ‘point clouds’ are created by a handheld laser scanner bouncing on a spring as the user walks through corridors, up stairs and round about. Making the map takes about as long as it takes to walk through the building. You can watch it online afterwards.
Despite the maps’ almost X-ray look, Zebedee can’t see through walls – the laser bounces off solid surfaces. But when you put all the data in one place you get a sliceable, zoomable, turnable map with architectural details like stairs, columns, voids and ceilings all measured to the nearest centimetre. But… no roof! That’s because our scientists are developing a flying laser scanner that scans rooftops from the air. Secret attics may be secret no longer.
That concludes our tour for today. If you’d like to take home your very own Zebedee souvenir, head to our website.
By Ali Green
Almost one in four older Australians are affected by chronic health conditions, and close to 1.2 million currently suffer from more than one. Given our ageing population, this number is set to increase significantly by 2030, adding more pressure to our health system.
Life for a chronic disease sufferer with complex conditions like diabetes, heart or lung disease typically means two to three hospital stays per year, on top of multiple visits to the GP for regular health checks.
In Australia’s largest clinical telehealth trial, we’ve equipped a group of elderly patients with broadband-enabled home monitoring systems to help them manage and monitor their conditions from home.
Patients can use the machines to measure their blood pressure, blood sugar, ECG (to detect heart abnormalities through electrical signals), lung capacity, body weight and temperature in a process that generally takes around 10-20 minutes.
The monitoring system’s large screen helps guide patients through the different procedures, and the data is sent off to a secure website where it becomes immediately available to a care team including the patient’s nurse and doctor. Daily stats are checked regularly by a specialist nurse who can assist the patient via telephone if there are any changes in their regular patterns.
Some 150 patients across Australia are testing out the machines as part of the CSIRO-led trial. Here are a few of their stories.
Janice and Bill
Victorian retiree Janice suffers from an irregular heartbeat, diabetes and low blood pressure – conditions that require twice weekly visits to her doctor and multiple hospital stays to be controlled. She also has diabetes related retinopathy which has caused her to lose most of her vision, making medical visits difficult for both herself and husband Bill.
Since using the telehealth monitoring system, Janice’s GP and hospital visits have reduced significantly, and she can better manage her symptoms to prevent hypoglycaemic episodes. Bill also has a clearer idea of how Janice is doing from day to day.
“If Janice’s blood pressure reading is particularly low, I can prevent any dizzy spells by getting her to sit down and giving her a glass of water. If her measurements are stable, I can pop out to do some shopping or walk the dog knowing that she should be fine on her own for a little while,” says Bill.
Jack
Jack has ischaemic heart disease and chronic obstructive pulmonary disease (COPD). This affects his airways, causing breathlessness, a chronic cough and mucus build-up.
During a routine check of Jack’s telehealth monitoring data, his nurse Lay noticed that his ECG results were slightly unusual. This prompted the nurse to call Jack, who complained of shortness of breath. An appointment was made with Jack’s doctor for a full ECG, which turned out to be fine.
As a result of this episode, Jack’s nurse arranged to visit him at home to discuss a medication regime and teach him to use his medicated spray. This meant Jack could self-manage his shortness of breath and prevent unnecessary doctor visits.
Frances
75-year-old Frances (pictured top) has a respiratory condition called bronchiectasis. This can easily develop into a chest infection without early warning and lead to a stay in hospital.
Every day, Frances conducts a ten-minute check-up with the telehealth monitoring system in between washing up the breakfast dishes and getting ready to go out. A nurse at the other end of the internet connection checks Frances’ measurements, looking for any signs of early deterioration.
“I was surprised by the idea of self-monitoring at first, but now that I’m used to it, I think it’s a terrific idea. It has really helped me to better understand my health,” says Frances.
As Australia’s population ages and more demand is placed on our health system, telehealth can help reduce patient hospitalisation, and the related costs, by allowing patients to better manage their chronic diseases from home.
The Home Monitoring of Chronic Disease for Aged Care project is an initiative funded by the Australian Government.
CSIRO is participating in One in Four Lives: The Future of Telehealth in Australia event, at Parliament House this morning from 7:45am AEDT.
Media: Sarah Klistorner M: +61 477 716 031
By Ian Oppermann, Director, Digital Productivity and Services
When disaster strikes – such as January’s bushfire in Victoria or the recent cold spell that froze much of North America – it’s vital for emergency services to get the latest information.
They need to access real-time data from any emergency sites and command centres so they can analyse it, make timely decisions and broadcast public-service updates.
CSIRO thinks it has a solution in its high-speed, high-bandwidth wireless technology known as Ngara, originally developed to help deliver broadband speeds to rural Australia.
The organisation has announced a licensing deal with Australian company RF Technology to commercialise Ngara so it can be used to allow massive amounts of information to be passed between control centres and emergency services in the field.
There is already interest from agencies in the United States and it’s hoped that Australian agencies will soon follow.
Squeezing more data through
The technology will package four to five times the usual amount of data into the same spectrum. This will allow emergency services to send and receive real-time data, track assets, and view interactive maps and live high-definition video from their vehicles. It’s a step in what has been a long journey toward an ambitious vision.
For years, the vision of the communications research community was “connecting anyone, anywhere, anytime” – a bold goal encompassing many technical challenges. Achieving that depended heavily on radio technology because only radio supports connectivity and mobility.
Over the years we designed ever more complex mobile radio systems – more exotic radio waveforms, more antenna elements, clever frequency reuse, separation of users by power or by spreading sequence and shrinking the “cell” sizes users operate in.
A research surge in the late 1990s and 2000s led to a wealth of technology developed in the constant drive to squeeze more out of radio spectrum, and to make connections faster and more reliable for mobile users.
This radio access technology became 3G, LTE, LTE-A and now 4G. Europe is working on a 5G technology. We’ve also seen huge advances in wireless local area networks (WLAN) and a strong trend to offload cellular network data to WLAN to help cope with the traffic flowing through the networks.
Demand for more keeps growing
Despite this, the data rate demands from users are higher than what mobile technology can offer. Industry commentators who live in the world of fixed communication networks predict staggering growth in data demand which, time tells us, is constantly underestimated.
We’ve even stretched our ability to name the volume of data flowing through networks: beyond terabytes we have petabytes (10^15), exabytes (10^18), zettabytes (10^21) and yottabytes (10^24 bytes) to describe galloping data volumes.
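For reference, the decimal prefix ladder is easy to pin down in code – a quick sanity check in Python:

```python
# Decimal (SI) prefixes for the data volumes mentioned above
prefixes = {
    "tera":  10**12,
    "peta":  10**15,   # sits between tera and exa
    "exa":   10**18,
    "zetta": 10**21,
    "yotta": 10**24,
}

# One yottabyte is a trillion terabytes
ratio = prefixes["yotta"] // prefixes["tera"]
print(ratio)  # 1000000000000
```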
A few more serious problems arise from all of this traffic flowing through the world’s networks. The first is the “spectrum crunch”. We have sliced the available radio spectrum in frequency, time, space and power. We need to pull something big out of the hat to squeeze more out of the spectrum available in heavy traffic environments such as cities.
The second is the “backhaul bottleneck”. All the data produced in the radio access part of the network (where mobile users connect) needs to flow to other parts of the network (for example to fixed or mobile users in other cities).
Network operators maintain dedicated high capacity links to carry this “backhaul” traffic, typically by optical fibre or point-to-point microwave links. This works well when the backhaul connects two cities, but less well when connecting the “last mile” in a built-up urban environment.
When the total volume of data to be moved – in bits per second per square metre – rises to levels that demand backhaul-scale capacity, and the users are mobile, some clever dynamic backhaul technology is needed.
As more of us carry yet more devices, and continue to enjoy high quality video-intensive services, we will keep pushing up our data rate demands on mobile networks. In theory, there is no known upper limit on the amount of data an individual can generate or consume. In practice, it depends on available bandwidth, the cost of data and the ability of devices to serve it up to us.
We have seen amazing progress in mobile data rates over the past decade. This trend will need to continue if we’re to keep pace with demand.
A new solution
To address the burgeoning data demand, and building on a strong history in wireless research, CSIRO has developed two major pieces of new technology – Ngara point-to-point (backhaul) and Ngara point-to-multi-point (access) technology. (Ngara is an Aboriginal word from the language of the Dharug people and means to “listen, hear, think”.)
The latter Ngara technology solves several big challenges over LTE networks through its “narrow cast” beam forming transmissions and smart algorithms which can form a large number of “fat pipes” in the air, reducing energy wastage of the radio signal, and increasing data rates and range.
It also enables wireless signals to avoid obstacles like trees, minimises the need for large chunks of expensive spectrum and allows agencies to dynamically change data rates where and when needed during an emergency.
In Australia we are looking at a field trial of Ngara in remote and regional communities to deliver critical broadband services such as health and education.
It’s the type of technology you’d expect on Batman’s utility belt – but you won’t find it in a DC Comics book about the world’s greatest detective. Instead, this bad boy is being used by our real-life heroes to fight crime.
It’s called Zebedee – a handheld laser scanner that generates 3D maps of all sorts of environments, from caves to factory floors, in the time it takes to walk through them.
The portable device works by emitting laser beams as it sways on a spring, continuously scanning the environment and converting its 2D measurements into a 3D field of view. In fact, it can collect over 40,000 range measurements in just one second. It could even create a 3D map of the Batcave in around 20 minutes.
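The core geometric step – turning a single 2D laser return, plus the scanner’s momentary pose, into a 3D world point – can be sketched like this. The pose values here are hypothetical; in the real device, the mapping software estimates them from the spring-driven motion:

```python
import numpy as np

def to_world(range_m, bearing_rad, R, t):
    """Project one 2D laser return (range and bearing in the scan plane)
    into world coordinates, given the scanner's pose (rotation R,
    translation t) at the instant of measurement."""
    p_local = np.array([range_m * np.cos(bearing_rad),
                        range_m * np.sin(bearing_rad),
                        0.0])                    # point in the 2D scan plane
    return R @ p_local + t

# As the head nods on its spring, R tilts the scan plane back and forth,
# sweeping the flat 2D scans into a full 3D point cloud.
R_level = np.eye(3)                              # hypothetical level pose
point = to_world(2.0, 0.0, R_level, np.zeros(3))
print(point)  # [2. 0. 0.]
```

Repeat this for tens of thousands of returns per second, with the pose updated for each one, and the ‘point cloud’ images described above fall out.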
But it’s never been used like this before.
For the first time, our real world crime fighters at the Queensland Police Service are using Zebedee to help piece together crime scene puzzles.
Crime scenes can be difficult to investigate. They’re often places like dense bushland, steep slopes or dangerous caves, which can make thorough sweeps of the scene both tough and time consuming. Using Zebedee, also known as ZEB1, police can now easily access these hard to reach places and map confined spaces where it may be difficult to set up bulky camera equipment and tripods. It also means less disturbance of the crime scene.
Using data collected by the scanner, police investigators can quickly recreate the scene on their computer in 3D, and view it from any angle they want. They can then locate and tag evidence to particular locations with pinpoint accuracy.
This Brisbane-born technology is now available commercially and this local application is a striking example of how 3D mapping can allow us to access locations and view angles previously out of our reach.
It will help our local detectives and crime fighters in the Police Service generate and pursue lines of enquiry for incidents like murder cases and car crashes. You could say that Zebedee puts the CSI in CSIRO.
We’re working on even more ways to adapt Zebedee for a range of other jobs that require 3D mapping, from security and emergency services to forestry and mining, even classroom learning.
Film makers may soon be able to use Zebedee technology to easily digitise actual locations and structures when creating animated worlds. Maybe the next Batcave we see at the movies will be more realistic than ever before, created by 3D mapping an actual cave.
The potential applications are endless.
By Ali Green and Sarah Klistorner
An estimated one million Australians have diabetes, and this number is expected to double by 2025. About 60 per cent of these people will develop eye issues, such as diabetic retinopathy.
Diabetic retinopathy is one of the leading causes of irreversible blindness in Australian adults. The disease often has no early-stage symptoms and is four times more likely to affect Indigenous Australians.
Just imagine if this disease was preventable.
During the past few months, our researchers have been working with Queensland Health and the Indigenous and Remote Eye Health Service (IRIS) on the Torres Strait Islands to set up a remote eye screening service – giving hundreds of people access to specialist eye care.
For people living in remote areas, a five-hour round trip for specialist medical care can be disruptive to their family and community. Transporting patients can also be expensive.
Our Remote-I system is saving patients from the long and sometimes unnecessary journey by using local clinicians to conduct routine 15-minute retinal screenings, often as part of scheduled health clinic visits. Our technology sends hi-res retinal images taken in the screenings to ophthalmologists in Brisbane via satellite broadband.
Previously, ophthalmologists would only be able to fit in a limited number of eye screenings and surgeries when they visited remote communities. Once fully implemented, a city-based ophthalmologist will be able to screen up to 60 retinal images per week with the help of Remote-I.
Preliminary results from a review of data collected at one location showed that only three out of 82 patients screened to that date had a sight-threatening condition and required an immediate referral. Previously, those other 79 patients not requiring referrals may have held up the queue while the specialist was visiting the remote community. With Remote-I, those who need immediate treatment or attention can already be first in line.
With only 900 practising ophthalmologists in Australia, and a high demand for eye health services in remote locations, finding new ways to deliver health services to remote communities is vital to providing the best care when and where it’s needed.
By June 2014 the Tele-Eye Care trial will have screened 900 patients in remote WA and QLD. In addition to streamlining health care processes, the trial is collecting a lot of data.
And this is where the science gets interesting.
With patients’ consent, collected images will be used by the Tele-Eye Care project to study blood vessel patterns in retinas. Algorithms will then be designed to automatically detect particular eye diseases, which will aid diagnosis in routine screenings.
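To give a flavour of what that kind of image processing involves, here’s a toy sketch – a generic gradient filter on a synthetic image, not the project’s actual algorithm – showing how a dark vessel-like structure lights up a simple ‘vessel-ness’ measure:

```python
import numpy as np

def vessel_response(img):
    """Crude vessel-ness proxy: gradient magnitude of the image.
    Real retinal-screening algorithms use far more sophisticated filters."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

# Synthetic 64x64 "retina": bright background, one dark horizontal vessel
img = np.ones((64, 64))
img[30:33, :] = 0.2                  # the vessel (rows 30-32)
resp = vessel_response(img)

# The strongest responses cluster along the vessel's edges
rows = np.argwhere(resp > 0.5 * resp.max())[:, 0]
print(sorted(set(rows)))             # [29, 30, 32, 33]
```

An automated screening pipeline would go on to measure the shape and density of such responses and flag images whose patterns suggest disease for an ophthalmologist’s attention.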
Even though tele-ophthalmology has been around for many years, this is the first time anyone has looked at image processing techniques to automatically detect eye defects in routine screening environments via satellite broadband.
We’re working hard to deliver better health outcomes for Indigenous Australians. Being able to provide diagnoses on the spot will make a huge impact on delivering faster, more cost-effective eye care services to the outback and preventing blindness.
This initiative is funded by the Australian Government.
Media contact: Ali Green +61 3 9545 8098
This week’s heatwave across southern Australia has reminded us of the serious dangers posed by grassfires. They might not sound that threatening, but these fires can travel at speeds of up to 25 kilometres per hour, and damage hundreds of hectares of land within a matter of hours.
So while many of us were enjoying our summer holidays, our team of fire scientists were hard at work with researchers and volunteers from Victoria’s Country Fire Authority (CFA) to help learn more about grassfire behaviour in Australia.
In a series of carefully designed, planned and monitored experiments, the research team lit controlled fires in grass fields near Ballarat, an hour west of Melbourne.
The aim was to safely gather new and thorough data about grassfire behaviour in different conditions. Experimental plots containing grasses at different stages of their life cycle were burned, while experts observed and various instruments measured things like the time it took for the fire to burn across the 40 x 40 metre plot.
Australian researchers have been looking into forest fires and bushfires for decades, but this is the first time in nearly 30 years that we’ve conducted research into grassfires.
Back in 1986 we ran similar experiments in the Northern Territory, which led to the development of our Grassland Fire Spread Meter. This tool is used by rural fire authorities across the country to predict the rate that a grassfire is likely to spread once it starts.
What remains unknown is at what stage of the grass’s life cycle it becomes a fire hazard, especially for the grass types found across southern Australia.
Today, we have technology like unmanned aerial vehicles (UAVs) to help gain a new, bird’s eye perspective of the fire’s progress, allowing us to analyse the burns with a whole new level of detail.
Controlled by our scientists, the robot quadcopters flew above the experimental burns, filming the fire as it spread through the grass. The vision, along with other data captured by thermal sensors, will be used to develop computer models that can replicate and predict grassfire behaviour.
The results will help fire authorities like the CFA better respond to grassfires, as well as improving how they determine Fire Danger Ratings and when to declare Fire Danger Periods in particular regions.
The timing of the experimental burns was critical. The crew waited until the weeks of late December and early January for safe temperature and wind conditions, to ensure any fires they lit would be easy to control and contain. They also needed the grass curing levels to be just right. Thankfully, this was all wrapped up before we, and the grasslands, were hot and bothered by the heatwave.
Check out this video for the how and why of the experimental burns:
By Carrie Bengston
We all feel quite virtuous when we do our bit for the environment – whether it’s taking our own bags to the shops, sorting our recycling or leaving the car at home. But have you ever thought about doing the same for your computer? We have – and we’ve got a pretty certificate to prove it.
Forgive us for bragging, but our supercomputer in Canberra, ‘Bragg’, has recently been named the world’s 10th most energy-efficient supercomputer in the Green500 list, which ranks the energy use of supercomputers according to performance-per-watt.
Bragg was named after Adelaide father-and-son physicists William Henry and William Lawrence Bragg, Australia’s first Nobel Prize winners. It handles our massive research data sets, does our complex computer modelling and simulates dynamic processes. This helps us make better decisions about things like water security, bushfire preparedness, materials analysis, and coastal water quality.
And it does all of this using less energy per MegaFLOP (a unit of supercomputer speed) than all but 9 of the world’s fastest computers. But how?
Well, Bragg is a GPU cluster which means it gets its speed from Graphic Processing Units (GPUs) initially designed for fancy graphics in computer games. A few years back, computer geeks realised GPUs could do more than handle images of shoot ‘em up games or Minecraft. They could also perform multiple tasks at once at a fraction of the price of traditional computers.
Our Bragg cluster has had three GPU upgrades during its four year life. It’s now over ten times faster and twice as energy efficient. This has kept it ‘up to speed’ to handle the workloads our scientists throw at it.
But it wasn’t that easy to get a top spot on the list. First we had to measure the number of double precision floating point operations per second (or FLOPS) by running a performance test known as the LINPACK benchmark.
Then we submitted this number (if you’re wondering, it was 167.5 TFLOPS, or 167.5×10^12 FLOPS) to the TOP500 list – where we ranked at number 260. This qualified us to submit to the Green500 list. To do this, we ran the LINPACK benchmark while measuring the power consumption on the Bragg cluster in the Canberra Data Centre.
So our 167.5 TFLOPS using 71.01 kW of power gives us 2,358.69 MFLOPS/watt – that’s roughly 2.36 billion calculations per second for every watt of power. Confused?
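The arithmetic behind that figure is simple division. A quick check in Python, using the rounded numbers quoted above (small differences from the official figure come from rounding the inputs):

```python
# Green500 efficiency = sustained speed / power drawn during the run
flops = 167.5e12            # 167.5 TFLOPS from the LINPACK benchmark
watts = 71.01e3             # 71.01 kW measured during the benchmark run

mflops_per_watt = flops / watts / 1e6
# ~2,358.8 – within rounding distance of the quoted 2,358.69 MFLOPS/watt
print(f"{mflops_per_watt:,.0f} MFLOPS/watt")
```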
To put it another way, if you have an old 100 watt light at home, Bragg can perform 236 billion calculations per second using that amount of power. Check it out in action:
Bragg isn’t our only cool supercomputer. The new iVEC Pawsey Centre in WA is Australia’s largest supercomputer, used for incredible data-intensive projects like the world’s largest radio telescope, the future Square Kilometre Array. It uses a recycled groundwater cooling system which will save around 38.5 million litres of water per year compared to traditional cooling methods.
December 1 – 6 is the 20th International Congress on Modelling and Simulation, MODSIM in Adelaide.
By James Davidson
Cylons, Skynet, HAL 9000, Agent Smith, Haley Joel Osment. With characters like these, it’s no wonder people are concerned about the intelligent machines of tomorrow. But is there really any reason to fear? The truth is artificial intelligence (AI) is proving to be quite helpful…at least so far.
Clever, self-learning computer systems are helping us answer some of the world’s biggest problems – like how to predict bushfire hotspots. Unlike traditional methods where our best guesses are subjective, intelligent computers can use machine learning to replicate events based on advanced pattern recognition.
This month, our researchers revealed an AI system that could help us plan for future fires. It’s based on artificial neural networks (ANNs) which have actually been around since the 80s. These models allow computers to learn from data and provide a pretty accurate estimate of future events, eliminating many assumptions.
Today, ANNs are being used for deep cognitive imaging, an advanced form of pattern recognition. Based on this idea, our team of machine learning experts have built a deep cognitive learning system using ANNs to predict fire incidents across Australia. This could be the first step in providing information for emergency services planners to decide where to focus future firefighting resources.
So how does it work?
To put it simply, a computer is shown an image that represents a set of data. It’s then shown another image that has resulted from the first. The computer doesn’t know how or why the two are related, but it learns to estimate the outcome based on the first input. Essentially, the computer learns the cause-and-effect relationship so it can also predict the effect side of the equation in different scenarios.
Our team trained their computer to learn the relationship between Australian climate maps and fire hotspots. To do this, they presented it with maps of Australia’s past climate using data from the Bureau of Meteorology. Next, they showed it maps of fire hotspots compiled from satellite imagery data collected by NASA.
The computer wasn’t told how the two maps were related, other than the fact that the first map resulted in the second. But, almost magically, it was able to use the ANN to learn how to reproduce the fire maps.
Then, they got even trickier. They showed the computer a scenario based on Australia’s climate between 2001 and 2010. It was able to replicate the real world occurrence of fire hotspots with 90 per cent accuracy at the 5 x 5 kilometre scale. Not bad!
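The cause-and-effect learning described above can be illustrated with a toy example. Here the ‘climate’ and ‘hotspot’ maps are random stand-ins linked by a hidden rule, and a minimal one-layer network – far simpler than the deep ANNs our team used – recovers the mapping from example pairs alone:

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_test, n_cells = 400, 100, 25      # flattened 5x5 "maps"

# Hidden cause-and-effect rule linking "climate" to "hotspots" –
# the network is never shown this, only example pairs.
W_true = rng.normal(size=(n_cells, n_cells))

def make_data(n):
    X = rng.normal(size=(n, n_cells))        # stand-in climate maps
    Y = (X @ W_true > 0).astype(float)       # stand-in hotspot maps
    return X, Y

X_tr, Y_tr = make_data(n_train)
X_te, Y_te = make_data(n_test)

# One-layer network trained by gradient descent on cross-entropy loss
W = np.zeros((n_cells, n_cells))
for _ in range(500):
    P = 1.0 / (1.0 + np.exp(-(X_tr @ W)))    # predicted hotspot odds
    W -= 0.1 * X_tr.T @ (P - Y_tr) / n_train

# How often does it predict held-out "fire maps" correctly, per cell?
acc = ((X_te @ W > 0).astype(float) == Y_te).mean()
print(f"held-out per-cell accuracy: {acc:.2f}")
```

The real system works on vastly larger maps with deep networks, but the principle is the same: show the machine pairs of cause and effect, and let it work out the relationship for itself.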
It’s early days for this AI, but unlike the scary smart machines of film fiction, this work poses no threat to human life. Instead, it could go a long way towards saving lives by improving our understanding of how different climate scenarios impact fire regimes across Australia.
We’re also working on a suite of other smart tools for disaster management and recovery. Learn more in our media release.
November 27-28 is the Building a System of Systems for Disaster Management event in Victoria. We’re looking at how Australia’s key agencies can improve the way they access vital information during emergencies.
Remember the ol’ days of dial-up internet? When you got disconnected every time the phone rang and used up all your drive space to download one little file? Man, life was hard.
Luckily in the 90s our peeps came up with a little something called WiFi – and hallelujah all of our first world problems were solved.
Using the same mathematics that astronomers first applied to piece together waves from black holes, the potential of WiFi became ‘patently’ clear to its inventors. Today, its myriad applications have fundamentally changed how we think of and use technology in our daily lives. In fact, by the end of this year more than 5 billion devices will be connected using our patented WiFi technology. The discovery is one of our most successful inventions to date and is internationally recognised as a great Aussie science success story.
This infographic explains how WiFi technology was created and how it actually works (click for full size):
While WiFi was developed as part of our previous ICT Centre and Radiophysics Research Division, our main wireless networks laboratory is now a part of our new Computational Informatics Research Division and has approximately 50 researchers located at our Marsfield site in Sydney.
These days, we are working with industry partners around the world on new challenges, such as using wireless tracking tools to help improve the performance of athletes and ensure the safety of miners, firefighters and emergency service personnel. We’re also helping farmers monitor soil fertility, crop growth and animal health by integrating wireless networks with centralised cloud computing.
Learn more about how we patented Wireless LAN technology.
Media: Dan Chamberlain. P: +61 2 9372 4491 M: 0477 708 849 Email: firstname.lastname@example.org
By Carrie Bengston
For us, Movember isn’t just about blokes growing facial hair and raising funds for men’s health – it’s a chance to collect data and muck around with technology.
Computer fluid dynamicist Fletcher Woolard is more used to animating geophysical flows like tsunamis and landslides. But this month, he thought he’d try something a little different – animating mo growth.
Using his skills in computer simulation, he photographed day by day, millimetre by millimetre, follicle by follicle how his mo was growing – and turned it into a cool time lapse video.
In just four seconds, you can see Fletcher’s facial hair growing at around 400,000 times the normal speed.
Unfortunately his efforts were still a long way off Ram Singh Chauhan, who has spent over thirty years crafting an impressive 4.29 metre long moustache. But hey, it’s not bad for less than a month’s growth.
Perhaps not surprisingly, this isn’t our first attempt at capturing hair growth data.
Back in 2008, our image analysis team developed software to count and measure hair regrowth. It was designed to test the effectiveness of hair removal products more accurately – which is traditionally a manual (and pretty boring) process.
The software took digital images from a specially designed scanner pressed onto the skin, and used smart algorithms to automatically look for the hairs. Despite initial interest from several hair replacement studios, it sadly never made it to product stage.
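As a rough illustration of the idea (not the actual CSIRO software), hair counting can be reduced to two steps: threshold out the dark pixels, then count connected groups of them, treating each group as one hair. A toy sketch:

```python
import numpy as np

def count_hairs(image, dark_threshold=0.3):
    """Count dark strands in a greyscale image (0 = black, 1 = white).

    A toy stand-in for the real software: threshold out dark pixels,
    then flood-fill to count connected groups, one group per hair.
    """
    dark = image < dark_threshold
    seen = np.zeros_like(dark, dtype=bool)
    rows, cols = dark.shape
    count = 0
    for r in range(rows):
        for c in range(cols):
            if dark[r, c] and not seen[r, c]:
                count += 1                  # found a new strand
                stack = [(r, c)]            # flood-fill all its pixels
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and dark[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# A tiny synthetic "scan": white skin with two separate dark strands.
scan = np.ones((8, 8))
scan[1:5, 2] = 0.1      # first hair
scan[3:7, 5] = 0.1      # second hair
print(count_hairs(scan))  # → 2
```

The real system would of course also need to handle lighting variation, overlapping hairs and strand thickness, which is where the “smart algorithms” come in.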
But all is not lost. Luckily in today’s world of mobile wireless technology, there’s an app for that – the mo tracker.
For more information or to get involved in Movember, head to Movember Australia.
By Michael Brünig, Deputy Chief, Computational Informatics
There isn’t a radio-control handset in sight as a nimble robot briskly weaves itself in and out of the confined tunnels of an underground mine.
Powered by ultra-intelligent sensors, the robot intuitively moves and reacts to the changing conditions of the terrain, entering areas unfit for human testing. As it does so, the robot transmits a detailed 3D map of the entire location to the other side of the world.
While this might read like a scenario from a George Orwell novel, it is actually a reasonable step into the not-so-distant future of the next generation of robots.
A recent report released by the McKinsey Global Institute predicts that new technologies such as advanced robotics, the mobile internet and 3D printing could contribute between US$14 trillion and US$33 trillion to the global economy per year by 2025.
Technology advisory firm Gartner also recently released a report predicting the “smart machine era” to be the most disruptive in the history of IT. This trend includes the proliferation of contextually aware, intelligent personal assistants, smart advisers, advanced global industrial systems and the public availability of early examples of autonomous vehicles.
If the global technology industry and governments are to reap the productivity and economic benefits of this new wave of robotics, they need to act now to identify simple yet innovative ways to disrupt their current workflows.
The automotive industry is already embracing this movement by discovering a market for driver assistance systems that includes parking assistance, autonomous driving in “stop and go” traffic and emergency braking.
In August 2013, Mercedes-Benz demonstrated how their “self-driving S Class” model could drive the 100-kilometre route from Mannheim to Pforzheim in Germany. (Exactly 125 years earlier, Bertha Benz drove that route in the first ever automobile, which was invented by her husband Karl Benz.)
The car they used for the experiment looked entirely like a production car and used most of the standard sensors on board, relying on vision and radar to complete the task. Similar to other autonomous cars, it also used a crucial extra piece of information to make the task feasible – it had access to a detailed 3D digital map to accurately localise itself in the environment.
Implemented at scale, these autonomous vehicles have the potential to significantly benefit governments by reducing the number of accidents caused by human error and by easing traffic congestion, since cars will no longer need the large following gaps that tailgating laws currently enforce.
In these examples, the task (localisation, navigation, obstacle avoidance) is either constrained enough to be solvable or can be solved with the provision of extra information. However, there is a third category, where humans and autonomous systems augment each other to solve tasks.
This can be highly effective, but it requires a human remote operator or, depending on real-time constraints, a human on stand-by.
The question arises: how can we build a robot that can navigate complex and dynamic environments without 3D maps as prior information, while keeping the cost and complexity of the device to a minimum?
Using as few sensors as possible, a robot needs to be able to get a consistent picture of its environment and its surroundings to enable it to respond to changing and unknown conditions.
This is the same question that confronted us at the dawn of robotics research, and it was addressed in the 1980s and 1990s to deal with spatial uncertainty. However, the decreasing cost of sensors, the increasing computing power of embedded systems and the ready availability of 3D maps have reduced the urgency of answering this key research question.
In an attempt to refocus on this central question, we – researchers at the Autonomous Systems Laboratory at CSIRO – tried to stretch the limits of what’s possible with a single sensor: in this case, a laser scanner.
In 2007, we took a vehicle equipped with laser scanners facing to the left and to the right and asked if it was possible to create a 2D map of the surroundings and to localise the vehicle to that same map without using GPS, inertial systems or digital maps.
The result was the development of our now commercialised Zebedee technology – a handheld 3D mapping system that incorporates a laser scanner swaying on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it.
While the system does add a simple inertial measurement unit which helps to track the position of the sensor in space and supports the alignment of sensor readings, the overall configuration still maximises information flow from a very simple and low cost setup.
It achieves this by moving the smarts away from the sensor and into the software to compute a continuous trajectory of the sensor, specifying its position and orientation at any time and taking its actual acquisition speed into account to precisely compute a 3D point cloud.
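A heavily simplified sketch of that idea: once the sensor’s continuous trajectory is known (in the real system it is solved for in software), each time-stamped laser return can be projected into world coordinates using the pose interpolated at its exact timestamp. The trajectory, sway and measurements below are all invented for illustration, and the sketch is 2D rather than 3D:

```python
import numpy as np

# A made-up continuous trajectory: pose (x, y, heading) sampled over time.
t_knots = np.linspace(0.0, 1.0, 11)
x_knots = t_knots * 2.0                          # operator walking 2 m forward
yaw_knots = 0.3 * np.sin(2 * np.pi * t_knots)    # scanner swaying on its spring

def pose_at(t):
    """Interpolate the sensor pose at an arbitrary timestamp."""
    x = np.interp(t, t_knots, x_knots)
    yaw = np.interp(t, t_knots, yaw_knots)
    return x, 0.0, yaw

def to_world(t, rng_m, bearing):
    """Project one laser return (range, bearing, taken at time t) into the world frame."""
    x, y, yaw = pose_at(t)
    a = yaw + bearing
    return x + rng_m * np.cos(a), y + rng_m * np.sin(a)

# Each return is individually time-stamped, so every point lands where the
# sensor actually was at that instant, not where it was when the scan began.
stamps = np.random.default_rng(1).uniform(0.0, 1.0, 1000)
cloud = np.array([to_world(t, 3.0, 1.2) for t in stamps])
print(cloud.shape)   # (1000, 2)
```

The hard part of the real system, glossed over here, is estimating that continuous trajectory in the first place from the scan data and the inertial unit.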
The crucial step of bringing the technology back to the robot still has to be completed. Imagine what becomes possible when robots equipped with such mobile 3D mapping technologies can enter unknown environments (or actively collaborate with humans) without the barriers of a conventional autonomous vehicle: they can be significantly smaller and cheaper while remaining robust in terms of localisation and mapping accuracy.
From laboratory to factory floor
A specific area of interest for this robust mapping and localisation is the manufacturing sector, where non-static environments are becoming more and more common, the aviation industry being one example. The cost and complexity of each device has to be kept to a minimum to meet these industry needs.
With a trend towards more agile manufacturing setups, the technology enables lightweight robots that are able to navigate safely and quickly through unstructured and dynamic environments like conventional manufacturing workplaces. These fully autonomous robots have the potential to increase productivity in the production line by reducing bottlenecks and performing unstructured tasks safely and quickly.
The pressure of increasing global competition means that if manufacturers do not find ways to adopt these technologies soon, they run the risk of losing their business, as competitors will soon be able to produce and distribute goods more efficiently and at lower cost.
It is worth pushing the boundaries of what information can be extracted from very simple systems. New systems that implement this paradigm will gain the benefits of unconstrained autonomous robots, but this requires a change in the way we look at production and manufacturing processes.
This article is an extension of a keynote presented at the robotics industry business development event RoboBusiness in Santa Clara, CA on October 25 2013.
My nine year old son has lived his whole life in a house that doesn’t have a cable connected to a telephone – not to mention to a laptop or mouse. This is largely because wireless technology has given us the freedom to live life wirelessly using devices like laptops, TVs and smartphones. And its popularity just keeps on growing. It’s estimated that there will be 5.8 million WiFi hotspots across the globe by 2015 and 800 million WiFi–enabled households by 2016.
Soon, wireless devices will be everywhere in our daily lives, measuring and optimising things that we never thought possible – and we won’t even know it. Think high definition 3D video that streams seamlessly to tiny wireless devices without you having to worry about signal strength, coverage or network congestion. Whether you want to park your car without driving, feed your dog when you’re on holidays or program your fridge to automatically add milk to your shopping list when you’re running low – all of this is possible with wireless technology.
Today, we’re taking wireless technology out of our labs and working with a range of partners to solve important problems. Our wireless ad-hoc system for positioning (WASP) technology is helping improve the performance of athletes and ensure the safety of miners, firefighters and emergency service personnel. We’re also helping farmers monitor soil fertility, crop growth and animal health by integrating wireless networks with centralised cloud computing.
In twenty years’ time who knows how far we could get. Maybe all cars on the road will be tracked, integrated and controlled over wireless links. Just think – no more traffic jams or crashes. I bet Rhonda and Ketut will be happy with that.
With the increasing popularity and growth of wireless technology for business, residential and mobile users, there’s a big demand for new research and development in the future. And we’re chuffed to be at the forefront of these exciting changes.
Check out our infographic showing how we’re using Wi-Fi technologies today (click image for full-size):
Learn more about our work in wireless networks.
Iain Collings presented a keynote on the future of wireless research at The Australasian Telecommunications Networks and Applications Conference on Wednesday 20th November.
Media: Dan Chamberlain. P: +61 2 9372 4491 M: 0477 708 849 Email: email@example.com
By Flo Conway-Derley
Today marks the official operational launch of the iVEC Pawsey Centre — Australia’s newest supercomputer facility in Perth, Western Australia.
Supercomputing resources at iVEC’s Pawsey Centre will be available for data-intensive projects across the scientific spectrum, including radio astronomy, geosciences, biotechnology and nanotechnology.
In particular, a significant portion of the supercomputing power will be dedicated to processing radio-astronomy data from major facilities, such as our Australian SKA Pathfinder telescope.
ASKAP will need the processing power of the Pawsey Centre to crunch some serious data. When fully operational, the telescope will stream about 250 terabytes of data per day, which the supercomputer will need to process in more-or-less real time.
In 2009, our astronomers used the Australia Telescope Compact Array (ATCA) in Narrabri to create a picture of the galaxy Centaurus A. This galaxy is large — around 200 times the size of the full moon — and it took more than 1200 hours of data collection and 10,000 hours of computing to make the image.
With ASKAP and the Pawsey Centre, the image would take around ten minutes.
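A quick back-of-envelope check on those figures (using the article’s numbers, not official specifications):

```python
# 250 terabytes per day, converted to a sustained per-second rate
# (decimal units: 1 TB = 1000 GB).
tb_per_day = 250
gb_per_second = tb_per_day * 1000 / (24 * 3600)
print(f"{gb_per_second:.1f} GB every second")   # ≈ 2.9 GB/s

# The Centaurus A image: 10,000 hours of computing then, ~10 minutes now.
speedup = 10_000 * 60 / 10
print(f"roughly {speedup:,.0f}x faster")        # 60,000x
```

In other words, the Pawsey Centre has to sustain nearly three gigabytes of incoming telescope data every second, around the clock.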
Some other interesting facts about the Pawsey Centre:
- It was named after Dr Joseph Pawsey, who is widely acknowledged as the father of Australian radio astronomy.
- It houses a supercomputer able to exceed one quadrillion operations every second, or one “petaflop”.
- It has 40 petabytes of storage capacity — that’s 40,000,000,000,000,000 bytes, or 223,000 DVDs.
- Once finished, the Centre will house 20 tonnes of computer equipment and 400 km of fibre optic cable within the 1000 sqm building.
- Half of the Pawsey Centre’s floorspace has been earmarked for the computing needs of the future international Square Kilometre Array (SKA) telescope project.
- A groundwater cooling system, developed by our CSIRO Geothermal Project, is used to cool the supercomputer, rather than water towers. This system brings water up from groundwater bores 140m deep and cycles 90 litres a second to cool the machine.
- The amount of water saved by using a groundwater cooling system to cool the Pawsey Centre supercomputer is equivalent to the amount of drinking water consumed in South Perth.
Note: CSIRO, as centre agent for iVEC, has led the development of the Pawsey Centre. It owns and maintains the building, which is constructed on CSIRO-owned land adjacent to the Australian Resources Research Centre facility. For more information about iVEC, visit http://www.ivec.org