The International Year of Crystallography is drawing to a close, and we’re not going to let it finish without showing you something about what crystallographers do. Which is not what most people would assume when they hear the word: there are crystals involved, but it’s not exactly the study of crystals as we generally think of them. It’s the study of how matter is organised, using crystals as a tool.
Now, naturally we want to know how matter is arranged. Apart from being very, very interesting to find out about, it also helps in many different fields, from drug delivery to materials science. In fact, it was crystallography that provided – controversially – the key to understanding the structure of DNA.
So assume you want to look at something in the greatest possible detail, seeing its smallest possible components. Obviously, you’d use a microscope. But there’s a limit to the smallness of things you can see that way: the wavelength of the light human eyes see. Visible light has a wavelength of between roughly 400 and 700 nanometres, and can’t resolve atoms, which are separated by about 0.1 nanometres. That, however, is the perfect wavelength for X-rays.
We can’t make appropriate X-ray lenses to build X-ray microscopes for studying molecules, so we have to do it in a roundabout way. We beam X-rays onto crystals, which scatter the rays, in much the same way that light reflects when it hits an object. Then we use a computer to reassemble the scattered rays – the diffraction pattern – into an image. The diffraction from a single molecule would be so weak that we couldn’t get any meaningful information from it, so we use crystals, which hold many molecules in an ordered array, to amplify the signal so we can see it. Crystals are highly ordered structures made up of 10¹² or more molecules, and it’s this order that makes the X-ray diffraction patterns – the main tool of crystallography – possible to analyse.
Crystallographers were among the first scientists to use computers, and used them to do the advanced calculations needed to reassemble diffraction patterns into coherent images. That’s why it seemed fitting to name our supercomputer after the founders of crystallography – William Henry and William Lawrence Bragg. Lawrence was the first person to solve a molecular structure using X-ray diffraction.
Today we can not only view molecules in 3D, but also study the way they operate. Improvements in X-ray sources have also led to synchrotron facilities, which can produce far more intense and precise beams.
And speaking of synchrotrons …
One of our crystallographers, Tom Peat, has deposited more than 120 structures in the Protein Data Bank using data collected at the Australian Synchrotron. They were all derived from crystals developed in CSIRO’s Collaborative Crystallisation Centre.
This is one of our favourite structures.
It’s the structure of AtzF. This enzyme forms part of the breakdown pathway for atrazine, a commonly used herbicide. We’re trying to understand enzymes better and use them for bioremediation – cleaning up environmental detritus such as pesticides and herbicides – and we’ve now solved the structures of four of the six enzymes involved in the atrazine breakdown pathway. We also look at protein engineering, to see if we can make these enzymes even more effective at cleaning up the environment.
Before we get to the crystal image, there are other steps on the way. First, someone has to grow the crystals (clone the protein, express it, purify it and crystallise it). Then it’s off to the Synchrotron to get a data set (many diffraction images in sequence). Here you can see an actual protein crystal.
The picture on the right is the diffraction image.
The crystallographers measure the intensity of the reflections (the dark dots). They combine that with the geometry of the experiment and use some complicated maths (a Fourier transform) to produce an electron density map. They then use that map to build a model of the molecule.
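The Fourier-transform step can be sketched in a toy, one-dimensional form. This is an illustrative sketch only, not crystallographic software: real work is three-dimensional, and the measured intensities record amplitudes but not phases, so crystallographers must first solve the “phase problem” before the inverse transform can rebuild the map.

```python
import numpy as np

# Toy 1-D "unit cell": two atoms represented as Gaussian density peaks.
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
density = (np.exp(-((x - 0.3) ** 2) / 0.001)
           + np.exp(-((x - 0.7) ** 2) / 0.001))

# The diffraction experiment effectively Fourier-transforms the density.
structure_factors = np.fft.fft(density)        # complex: amplitude + phase
intensities = np.abs(structure_factors) ** 2   # what the dark dots record

# Given the phases as well, the inverse transform rebuilds the density map.
density_map = np.fft.ifft(structure_factors).real
print(np.allclose(density_map, density))  # True: the map matches the model
```

The point of the sketch is the round trip: the spot intensities alone (`intensities`) are not enough, but the full complex structure factors invert cleanly back to the electron density.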
Not all our crystallography work is in the same area. We also work on some pharmaceutical applications. One of our projects, with hugely important implications for human health, is on the design of desperately needed new antibiotics. We’ve been collaborating with Monash University, looking at the pathway that sulpha drugs (such as sulfamethoxazole) – the ones we used to treat bacterial infections prior to the discovery of penicillin – take to treat Golden Staph infections in humans. The aim is to design new antibiotics that target the same pathway. You can read a paper that describes our recent findings in the Journal of Medicinal Chemistry, and here’s a picture of what we’ve been doing.
We think this deserves its own Year. And we hope it’s clear just how important it is. Crystal clear.
Australia’s “ferals” — invasive alien plants, pests and diseases — are the largest bioeconomic threats to Australian agriculture. They also harm our natural ecosystems and biodiversity. Some, such as mosquitoes, also act as carriers of human diseases.
One method of controlling invasive plants and pests — known as biological control, or “biocontrol”— is to use their own enemies against them. These “biocontrol agents” can be bacteria, fungi, viruses, or parasitic or predatory organisms, such as insects.
To find biocontrol agents, we travel to the native home of invasive species and search for suitable natural enemies. After extensive safety testing, they are introduced into Australia.
But do they work?
Learning from the cane toad catastrophe
Cane toads, which were introduced in 1935 to control cane beetles in Queensland’s sugar cane crops, are probably the most infamous example of biocontrol going wrong in Australia.
But Australia’s borders were more open back then. To protect against such harmful mistakes, Australia now has world-leading biosecurity import regulations and an effective quarantine system.
To be allowed entry into Australia, a candidate biocontrol agent must be assessed using internationally recognised protocols. This demonstrates that it will not pose unacceptable risks to domestic, agricultural, and native species.
A cost-effective solution
Other control methods, such as the use of poisons and mechanical removal, require continued reapplication. Many biocontrol agents of plants and insects, once established, are self-sustaining and don’t have to be reapplied.
Prickly pear is a textbook biocontrol success story. The plant was introduced into Australia in the late 1770s and grown in a few areas of NSW and Queensland, until it became invasive after spreading rapidly following the flood of 1893. Biocontrol was initiated in the early 1900s, and the prickly pear moth, Cactoblastis cactorum, was introduced in 1926 from the pear’s native home in the Americas. Cactoblastis has been keeping prickly pear under control almost single-handedly to this day.
Since then, many more biocontrol agents have been introduced to control invasive plants. These include mimosa in our top end, bridal creeper in southern Australia, parthenium in Queensland and ragwort in Tasmania.
A series of cost-benefit analyses in 2006 revealed that for every dollar spent on biocontrol of invasive plants, agricultural industries and society benefited by A$23. This was due to increases in production, multi-billion dollar savings in control costs and benefits to human health.
Biocontrol has also proven to be the only effective way to significantly reduce European rabbits across Australia. Myxoma virus was released in 1950, followed by rabbit calicivirus in 1995, causing regular disease outbreaks in wild rabbits. Together, they have kept rabbit numbers well below the devastating pre-1950s levels.
It’s estimated that the benefit of rabbit biocontrol to agriculture is worth more than A$70 billion. This is the only example of a successful large-scale biocontrol program against a vertebrate pest anywhere in the world.
The initial costs of biocontrol programs are generally high. That’s because we have to find suitable candidate agents overseas, test them for safety in quarantine, and comply with regulations around release.
But once biocontrol agents are released and affect the invasive species across its range, follow-up control costs are greatly reduced.
Biocontrol is not a ‘silver bullet’
Biocontrol will not solve all problems to do with invasive species.
Weather and climate can affect biocontrol agents, like all living organisms. These two factors can slow, or even stop, the agents building up to the levels needed to control the invasive species.
In the case of the two rabbit viruses, virus-host co-evolution has led to a decline in effectiveness of the viruses over time as they lost virulence and rabbits developed resistance to them. This is similar to how bacteria can develop resistance to antibiotics. As a result, we must continue to search for ways to counteract these effects.
Like a multi-drug cocktail, biocontrol agents must often be used together to knock out an invasive species. And while biocontrol rarely completely eradicates an invasive species on its own, it may control it enough to be able to use other methods at a lower cost.
And just because we use biocontrol, it doesn’t mean we don’t need good farm practices and land management, such as bush restoration, to ensure the recovery of ecosystems affected by invasive species.
Biocontrol is unlikely to be the solution where invasive species are very closely related to species that we value — cats, for instance. Feral cats have recently been in the media as the greatest threat to Australia’s mammals. But because they are the same species as the cherished family moggy, a biocontrol program would be highly controversial.
New biocontrol programs
The historic successes of biocontrol in Australia justify continued investment. For widespread invasive species, there are no alternatives as cost-effective that work across the vast landscapes where feral species roam.
For example, the European carp pest makes up 90% of the fish biomass in the Murray–Darling river system. The most promising option being developed for large-scale control is the carp-specific koi herpesvirus, which is in the final stages of testing (to make sure the virus only targets carp). Its proposed release in Australia will soon be open for public debate.
Another case is the recent release of a rust fungus from Mexico for the biocontrol of crofton weed in eastern Australia. This invasive plant smothers grazing systems and natural ecosystems, including on the hillsides of Lord Howe Island, a World Heritage Area. The expectation is that this new, highly specific rust fungus will contribute significantly to controlling the plant, as other rust fungi have successfully done against other invasive plants.
After 100 years of history in Australia, biocontrol should continue to have a bright future, given it is the only approach that is environmentally friendly, cheap and effective.
This article was originally published on The Conversation.
It’s a hard life being a small farmer in sub-Saharan Africa. About 200 million people in the region are poor and undernourished. Most of them are smallholder farmers in rural areas, who rely on agriculture as their main source of food and income.
Part of the reason for their level of hardship is that the major staple crops, sorghum and cowpeas (which provide not just food but fuel and fodder for livestock) have low yields. Poor soil, low-quality seed, drought and disease all play their part.
Obviously, if these farmers could get greater productivity from their crops, they could have a secure supply of food, and possibly even be able to sell the excess and bring in some extra money. But conventional genetic improvement to increase yield is a slow process, and these farmers are hungry now.
So: how to make improved yields happen?
The Bill and Melinda Gates Foundation has just awarded us a $14.5 million grant to work on it. The five-year project, in partnership with other world-leading research teams from Switzerland, the USA, Germany and Mexico, will develop tools to generate self-reproducing hybrid cowpea and sorghum crops.
What we’re planning to do is to develop high-yielding sorghum and cowpea crops that have seeds the farmers can save and grow, and which don’t decrease in quality or yield. And that’s going to mean making a very fundamental change to the way they’re bred – changing from sexual reproduction to asexual.
Hybrid crops can produce yield increases of 30 per cent or more, because of what’s known as hybrid vigour – basically that some crosses between two strains of crop will combine the favourable traits of both parents and be more successful than either.
Hybrid vigour is the same mechanism which produces the loveable labradoodle. A labradoodle puppy inherits the favourable traits from its purebred Labrador and poodle parents. However, two labradoodles won’t produce labradoodle puppies (they’ll be more Labrador-ish, or more poodle-ish). In just the same way, the seed from hybrid crops will not express the favourable traits. The puppy’s increased adorableness is of course a matter of personal opinion but it is a furry demonstration of hybrid vigour.
Unfortunately, current technologies to produce hybrid seed (and labradoodles) are expensive, and farmers need to buy new seed every year as the favourable traits only last one generation.
If we can develop self-reproducing hybrid cowpea and sorghum crops the farmers would then be able to self-harvest high-quality seed, giving them a more secure food supply and possibly even increased income from selling excess seed.
It’s a big challenge. As project leader Dr Anna Koltunow explains ‘It’s not going to be easy, otherwise it would have been done already. The idea of changing the plants’ reproductive process to an asexual one is a complex undertaking’.
The first stage of the project will involve developing the techniques that will allow cowpea and sorghum plants to reproduce asexually. This is lab-based work. If this stage is successful, African breeders and institutes will join the project for the subsequent phases.
Biological illustration has come on a bit from the days of Gould’s gorgeous illustrations of birds, or Leonardo’s Vitruvian Man. Today, with the help of big data and big graphics power, we can visualise things, not just at the molecular level, but at work.
But why – apart from because it’s beautiful and fascinating – do we do it? How is it helpful? What can it show us?
Obviously, we’ve been using rudimentary data visualisation for a very long time. Charts, maps, tables, graphs. All data visualisations, but not at the level we now find ourselves working at. As Sean O’Donoghue, from our Digital Productivity and Services Flagship, puts it, ‘Data visualisation is a new visual language; we need to become fluent in it to manage the complexity of computational biology’.
Let’s think about genomic data. The more we know, the more we need new tools to deal with the knowledge we have. And we now know a lot. We’ve got the ability to generate tremendous amounts of genomic data from sequencing. Analysing that data is now the roadblock to our being able to convert what we’ve found into something useable.
Obviously, some genome analysis can be done using automated processes. But that still leaves a lot that depends on human judgement, particularly in the early stages such as hypothesis formation. Our concentration – and eyes – frankly aren’t up to spotting something different in a field of As, Cs, Gs and Ts (and nothing else), that seems to go on forever. Think of Where’s Wally?, in monochrome, with one Wally hidden on a single page hundreds of times larger than book pages. And then imagine that finding the Wally you’re looking for could make a big difference to people’s lives.
If we can combine visual and automated analysis, the pairing becomes more powerful. Users can seamlessly look at their data and perform computations on it, refining their analysis with each step.
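The automated half of that pairing is easy to illustrate. As a toy example (hypothetical data, not a real genomics pipeline): scanning a long run of As, Cs, Gs and Ts against a reference and reporting exactly where they differ is trivial for a machine, even when a human eye would never spot the change.

```python
# Build a long reference sequence and a sample with one hidden change.
reference = "ACGT" * 250_000            # a million bases of A, C, G and T
position = 123_456
sample = reference[:position] + "T" + reference[position + 1:]

# The machine's "Where's Wally?": list every position where they differ.
mismatches = [i for i, (r, s) in enumerate(zip(reference, sample)) if r != s]
print(mismatches)  # → [123456]
```

The machine finds the single changed base instantly; where it struggles is deciding whether that change *matters*, which is exactly where visualisation and human judgement come back in.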
Visualising also helps us reason about complex data. Sometimes, a well-chosen visualisation can make the solution to a complex problem immediately obvious. That’s because of the way that visual representations simultaneously engage the eyes and the memory. When we look at a visual, our eyes and our brain work in parallel to take in new information, and break it into small chunks. Then both the eyes and the brain process the bits in their different ways to extract meaning. It works like this.
You’ve gone to the supermarket – not your usual one – to buy bananas. When you walk in, your eyes scan the layout. At the same time, your brain is processing the various sections of the layout, and telling your eyes to home in on the fruit and veg section. It does this by sending signals from memory about how fruits look. Your eyes then break the entire scanned area into parts, then scan each part until they (all but instantly) recognise the veggie section. The same process is repeated until you spot the bananas in the fruits section. Your eyes and memory do their own things but work in parallel.
We’ve used our brains to build tools that can help us discover more and more. But making sense of what they’ve found still depends on us and our limitations. Around half of the human brain is devoted – directly or indirectly – to vision. Visualising the vast streams of data lets us work with what we’ve got to make it something more than a hunt for a tiny needle in a monstrous haystack.
If you want to see more data visualisations, there are some beautiful ones at Vizbi.
By Angela Beggs
Last weekend saw the kick off of Good Beer Week. In celebration of this wonderful week, we thought we’d take the hop-portunity to talk about some science that could make it easier for brewers of the future to keep making the best tasting beer.
As all beer fanatics would know, most beer is made from four key ingredients. Barley, water, hops and yeast.
For a brew of the perfect pale ale, you need to extract the sugars from grains (usually barley) so that the yeast can turn it into alcohol and carbon dioxide, creating beer.
Simple, right? Wrong. There are many components of beer that affect the smell, taste and the way it feels when it hits your taste buds.
Our experts in microfluidics have used a biosensor to measure the levels of maltose in beer with a high-tech “lab-on-a-chip” we call the Cybertongue® technology.
Maltose, or malt sugar, is a stepping stone in the process that converts the starch in barley into the alcohol and fizz in beer. Not much maltose is left at the end of the brewing process, but what there is contributes to the drink’s taste and body. Brewers test the levels of maltose in beer because too much of it can change the way the beer tastes.
How does it work? Well, our scientists have hijacked the inbuilt sensors that simple microbes use to find their food and re-purposed them to test for the presence of flavour molecules present in a wide range of food and beverages (including delicious beer).
To run the new test, all it takes is a drop of liquid amber. In the lab, researchers mix the maltose biosensors with the ale within the tiny channels of a special lab-on-a-chip – a transparent slice of plastic about the size of a credit card.
Once the sample has mixed, the measuring begins. Within a couple of minutes, the results are in! Measuring the level of maltose has been possible for many years, but doing it on a chip makes it much quicker and easier than it has ever been before.
This type of technology could be used to ensure quality and consistency across different batches of beer. And, as well as beer, maltose is found in other beverages, cereal, pasta, and in many processed products which have been sweetened, so there may be applications in those areas too.
The same biosensing technology can be used across many different types of beverages and foods, to measure nutritional properties and flavours, as well as guard against toxins or contaminants. In the future it might also be used to warn people with intolerances and food allergies to things like lactose, which is chemically similar to maltose.
Dr Nam Le will be speaking on the Cybertongue® technology – not so much the beer – at Biosensors 2014, May 27–30.
By Matthew Walker
They might be famous for their painful sting and delicious honey, but like many other insects, bees also produce a super strong silk. And we’ve found a way to recreate this for a range of everyday uses.
Unlike silk from spiders or silk worms, bee silk has a special molecular structure that our researchers have been able to reproduce in the lab. They can also give the silk special functionality by introducing additional or different proteins to the mix.
The result is ‘smart’ bee silk that can be turned into fibres, thick sponges or transparent films. This can be used for many different purposes from advanced aviation to wound repair and the replacement of human tissue.
Our recreation of bee silk is spawning a new generation of smart materials that can sense and respond to the environment. It has even been entered into a global innovation competition called LAUNCH, where scientists develop game-changing technologies to shape the future of fabrics.
You can check out all the finalists and vote for your favourite entry on the LAUNCH website.
Find out more about our silk gene research.
By Carrie Bengston
Last week, the prestigious US journal Proceedings of the National Academy of Sciences (PNAS) published a fascinating article about . . . slime. Not just any old slime but the slime layer made by bacteria as they grow and colonise surfaces like the back of your sore throat when you have a cold, or on medically implanted devices like catheters. Our scientists have been working with UTS to determine what the slime is made of. And the amazing thing is that it contains DNA.
Most of us think of DNA as the molecules in every cell nucleus that carry the genetic codes for our blue eyes or our big feet. But extracellular DNA (or eDNA) has been found in slime that some bacteria species produce to help form a layer that grows and expands. As you can see below, all those massing bacteria gliding over slime trails look a bit like traffic on roads. The eDNA-containing slime allows new ‘roads’ to be formed and the film of bacteria to spread efficiently.
Our role in the research was to use our computer vision techniques to translate the complex movements of swarming bacteria into a set of measurements other researchers could analyse. We used image analysis software to track the direction, speed and size of bacteria.
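A minimal sketch of that kind of tracking, under stated assumptions: this is a hypothetical two-frame example with synthetic data, not CSIRO’s actual image analysis software. It estimates a cell’s direction and speed from the movement of its intensity-weighted centroid between successive frames.

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (row, col) of a 2-D greyscale frame."""
    rows, cols = np.indices(frame.shape)
    total = frame.sum()
    return np.array([(rows * frame).sum(), (cols * frame).sum()]) / total

# Two synthetic frames: one bright "bacterium" that moves between them.
frame_a = np.zeros((20, 20)); frame_a[5, 5] = 1.0   # cell at (5, 5)
frame_b = np.zeros((20, 20)); frame_b[8, 9] = 1.0   # cell now at (8, 9)

dt = 1.0  # seconds between frames (assumed frame interval)
velocity = (centroid(frame_b) - centroid(frame_a)) / dt  # pixels/second
speed = float(np.linalg.norm(velocity))
print(velocity, speed)  # direction (down 3, right 4) at 5.0 pixels/second
```

A real pipeline must also segment each bacterium from its neighbours and match identities across frames; the centroid-displacement step above is just the final measurement once that is done.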
By forming a growing, interconnected slime layer or ‘biofilm’, bacteria are better able to resist antibiotic treatments. In the recent PNAS paper, the researchers wanted to better understand the role of extracellular DNA in biofilm development. So, they grew bacteria of a species called Pseudomonas aeruginosa between an agar gel and a transparent glass slide. They got spectacular movies of the biofilm spreading – both in the presence of extracellular DNA, and in its absence.
They found eDNA helps form the slime that guides the traffic flow. It co-ordinates the movement of the bacteria so that they move almost as one, rather than the bunch of individual cells they are. So despite the fact that DNA’s structure was announced 60 years ago, we’re still learning more about what it does today.
Future research could help us find new drugs or treatments that would target this extracellular DNA and may help to fight persistent infections. This means we can stay healthy and give those bacteria the heave-ho before they get moving.
For more information on how we’re working to keep you healthy, head to our website.