Thursday, 14 July 2016

Do not ban homoeopathy in the name of Science

The BBC reports on a petition to ban homoeopathy for pets, which has already been signed by 1,000 British vets. The justification is that homoeopathy does not work. The vet who started the petition said: "It's been shown that homeopathy doesn't work, so it probably shouldn't be offered any more even if it is offered with good intentions."

Classical homoeopathy naturally does not work. It relies on extreme dilutions; some remedies are so strongly diluted that you can show that most bottles do not contain anything of the original substance.

Furthermore, homoeopathy is taken as pills or drops. This means that you can run a double-blind randomized trial. You randomly give half the people the "medicine" and the other half a fake medicine (placebo). Neither the patient nor the doctor knows who got what (double blind). At the end you analyse whether there was a difference between the medicine and the placebo. If your medicine passes this trial, it works, no matter how it was made, whether conventionally or homoeopathically. Also for many traditional medicines it is not (well) known how they work, but the double-blind randomized trial shows that they do. Classical homoeopathic medicine has failed this foundational test.
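The logic of such a trial can be illustrated with a toy simulation (all numbers here are made up for illustration and are not from any real trial): everyone benefits from the placebo effect, but only a genuinely working medicine shortens recovery beyond that, and that extra effect is what shows up as a difference between the two groups.

```python
import random
import statistics

def run_trial(n_per_group=500, drug_effect=0.0, placebo_effect=1.0):
    """Simulate recovery times (days) in a double-blind randomized trial.

    Both groups get the placebo effect; a working drug shortens recovery
    by a further `drug_effect` days. Returns the mean difference in
    recovery time: placebo group minus treatment group.
    """
    baseline = 10.0  # made-up average recovery time without any treatment
    treated = [random.gauss(baseline - placebo_effect - drug_effect, 2.0)
               for _ in range(n_per_group)]
    placebo = [random.gauss(baseline - placebo_effect, 2.0)
               for _ in range(n_per_group)]
    return statistics.mean(placebo) - statistics.mean(treated)

random.seed(42)
# A remedy that works only via the placebo effect: difference near zero.
print(run_trial(drug_effect=0.0))
# A real effect shows up as a clear difference between the groups.
print(run_trial(drug_effect=2.0))
```

A real trial would also attach a significance test to that difference, but the core idea is just this comparison of two randomized groups.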

However, that homoeopathy does not work is not sufficient reason to ban it. It also does no harm, and in doubt we should always choose the side of freedom. Government action is there to fix important wrongs, not for stuff that just makes us uncomfortable.

Just sleep

Even if it does not work, homoeopathy may still do some good. Placebos work too: patients heal faster with a placebo than without treatment. Medical science should work much more on how to optimize the placebo effect: the number of times a day you take the pill, its colour, taste, size, ...

In The Netherlands doctors often tell their patients to simply go home, sleep and if the symptoms do not go away come back in a week. I like that, I am Dutch. Isn't it a relief when the doctor tells you it is not something serious and you should just go to bed? Sounds fine to me.

Foreigners mostly hate it. If they do not get a pill they do not feel taken seriously, and they can even get quite aggressive. I would not be surprised if their anger leads to an anti-placebo effect. Why not give them something homoeopathic? That is better than unnecessary medicine, which may do harm, and a lot better than over-prescribing antibiotics.

Homoeopathy or classical homoeopathy?

The vet in the BBC article gives the impression he really believes in homoeopathy. That could be problematic. I would avoid such doctors because they clearly do not understand how to evaluate evidence. But a doctor who pragmatically prescribes it in case someone should just go to bed would be fine.

It could also be that the medicines this vet prescribes are not classical homoeopathy, but herbal medicine, which is often marketed as homoeopathic. Those could naturally work; many conventional medicines come from substances used in healing traditions from all over the world. The active ingredient of aspirin famously comes from willow bark.

That is why I do not like intolerance directed against traditional medicine; we should study it, see if it works, try to understand why it works, how to reduce side effects and how to improve its effectiveness. In many cases we do not have clear evidence against it, like we have for classical homoeopathy, and even if the "explanation" of how a cure works makes no sense, we should keep an open mind that the actual treatment may bring benefits.

If a "homoeopathic" herbal medicine passed the double-blind trial and worked better than existing medicines, you could sell it as a normal medicine. Thus for me it is a bad sign when a pill is marketed as homoeopathic. I do not know whether it is true, but I once heard an ironic story about a pill that went through all the tests and was approved as a conventional medicine, but then the marketing department decided it was better to sell it as a herbal cure.

Science and politics

There is a parallel to the climate debate. Science can tell you some of the consequences of continued fossil fuel use. Science can tell you classical homoeopathy does not work. Science cannot tell society how to respond to climate change. Science cannot tell whether we should ban homoeopathy.

As a citizen I feel we should transition to renewable energy. As a citizen I feel we can be tolerant towards homoeopathy.

Related reading

BBC News, Science & Environment: Vets: Ban the use of homeopathy in animals

Steven Novella, MD, academic clinical neurologist at Yale University School of Medicine and president and co-founder of the New England Skeptical Society, wants to ban homoeopathy for pets: Should We Ban Homeopathy for Animals?

* Top photo: Cats by Abdullah AlBargan, used with a Creative Commons Attribution-NoDerivs 2.0 Generic (CC BY-ND 2.0) license.
* Middle photo: Cat, MacDuff the cat by Kevin Dooley, used with a Creative Commons Attribution 2.0 Generic (CC BY 2.0) license.
* This post was a wonderful excuse to finally write a popular post with cute cat photos.

Thursday, 7 July 2016

Is it time to freak out about the climate sensitivity estimates from energy budget models?

Estimates of climate sensitivity using simple energy budget models tended to produce lower values than many other methods. Consequently they were loved by the mitigation sceptical movement, who seemed to regard them as the most robust of all methods. Part of their argument is the claim that these are “empirical” estimates, conveniently forgetting the simple statistical model the method uses, that they still require information from physical global climate models for the forcings, and that global climate model output also fits the “empirical” temperature change (and many other observed changes).

Before the last IPCC report the estimate for the equilibrium climate sensitivity was between 2°C and 4.5°C with a best estimate of 3°C. I do not know of any explicit statement, but I have the feeling that the new studies with low estimates from energy budget models were the reason why the last IPCC report reduced the lower bound to 1.5°C. Since the reasons for the discrepancies were not understood, the last IPCC report no longer gave a best estimate for the equilibrium climate sensitivity.

The equilibrium climate sensitivity is defined as the equilibrium change in global mean near-surface air temperature after doubling the atmospheric concentration of carbon dioxide.

A Nature News and Views by Kyle Armour (2016) showed this week that three assumptions made in the simple energy budget models lead to strong biases.

1. This week Mark Richardson and colleagues (2016) showed that the temperature change is underestimated because we have few measurements in regions where the change is large, especially the Arctic. This masking problem creates a bias of 15%.

Furthermore, over the ocean, empirical estimates do not use the air temperature, but the sea surface temperature; the water temperature is a much smoother field and can thus be estimated with many fewer samples, which is good because observations over the oceans are sparse. Above sea ice the air temperature is used, which means that the decrease in ice cover also needs to be taken into account. The trend of the air temperature over the ocean is furthermore higher than the trend of the sea surface temperature. Together these effects make the "observed" trend 9% smaller.*

2. Climate change is mainly due to increases in carbon dioxide concentrations, but there is also warming due to increases in methane concentrations, cooling due to increases in aerosols (small airborne particles) and changes due to land use. Half a year ago Kate Marvel and colleagues showed that these forcings do not have the same global effect as carbon dioxide and that, as a consequence, the energy balance models are biased low. Marvel and colleagues estimate that this makes the estimates of the energy balance models 30% too low.

3. Previous work by Kyle Armour and colleagues (2013) showed that in the early warming phase the climate sensitivity appears smaller than the true value you would get if you waited until the system has returned to equilibrium. This leads to an underestimate of 25%.

Taking all three biases into account, the best estimate from the energy balance models rises from around 2°C to 4.6°C**; see Figure 1b of Armour (2016) reproduced below.

Climate sensitivity estimated from observations (black), its revision following Richardson et al. (blue), then following Marvel et al. (green), and in red the revision for the time dependence (Armour). The grey histogram shows climate model values.

The equilibrium climate sensitivity from global climate models is about 3.5°C***, which is close to the best estimate from all lines of evidence of about 3°C. The "empirical" estimate of 4.6°C is now thus clearly larger than that of the global climate models.

Is that a reason to freak out? Have we severely underestimated the severity of the problem?

Probably not. There are many different lines of evidence that support an equilibrium climate sensitivity around 3°C, with a likely range from around 2°C to about 4.5°C. That the simple energy balance models might now suggest a best estimate of around 4.6°C does not really influence this overall assessment. It is just one line of evidence.

That the energy balance climate sensitivity is minimally above the upper bound does not change this. These energy balance models have not been studied much, and the biases are so large that the corrections need to be very accurate, while they are currently mostly based on single studies. It is quite likely that this value will still change in the coming years. If this value still holds after a dozen more studies, you may want to consider freaking out a little. How uncertain this bias-corrected climate sensitivity is, is illustrated by its wide distribution in the above graph, with a 95% uncertainty range of 2.5-12.8°C.

[UPDATE. Gavin Schmidt mentions on Twitter that it should also be studied whether these three factors are fully independent. While they seem to relate to different aspects, there could be a link, because spatial patterns and forcing efficacy are strongly related. Thus it would be valuable to make a study that considers all three biases in combination.]

The promotion of the cherry picked climate sensitivity of 2°C, or lower, was disingenuous. A similar promotion of a value of 4.6°C would be no better. (Someone promoting a climate sensitivity of 12.8°C deserves a place in statistical Purgatory.)

There are many other lines of evidence for an equilibrium climate sensitivity around 3, from basic physics, to global climate models, various climatic changes in the deep past and the climate response to volcanoes. Before accepting values far away from 3 we would need to understand the physics of the feedbacks that produce such deviations.

Figure 1 Ranges and best estimates of ECS based on different lines of evidence. Bars show 5-95% uncertainty ranges with the best estimates marked by dots. Dashed lines give alternative estimates within one study. The grey shaded range marks the likely 1.5°C to 4.5°C range as reported in AR5, and the grey solid line the extremely unlikely less than 1°C, the grey dashed line the very unlikely greater than 6°C. Figure taken from figure 1 of Box 12.2 in the IPCC 5th assessment report (AR5). Unlabeled ranges refer to studies cited in AR4. The figure in the review article by Knutti and Hegerl (2008) presented by Skeptical Science is also a very insightful overview.

The likely range of possible climate sensitivity values has been between 2°C and 4.5°C since the 1990s. That does not sound like much progress. However, we now have many more lines of evidence and those lines have been much better vetted. Thus we can be more sure nowadays that this range is about right. A large part of the uncertainty comes from cloud and vegetation feedbacks. Having worked on clouds myself, I know that these are very difficult problems. Thus I am not hopeful that the uncertainty range will strongly decrease the coming decade or maybe even decades.

We will have to make decisions in the face of this uncertainty. Like any decision in a complex world.


* The temperature trend of the air temperature over the ocean is 9% higher than the trend of the sea surface temperature in the CMIP5 models. For most models the top layer is 10 m deep. For those models with a higher vertical resolution the trend is only 8% higher. The difference is small and not statistically significant, but the effective resolution of numerical models is normally larger than the nominal resolution, thus I would not be surprised if studies with dedicated high-resolution models led to estimates that are a few percentage points lower.

** If we simply combine all these biases, 1.24 (Richardson) × 1.30 (Marvel) × 1.25 (Armour), we get that the simple energy balance models are biased by as much as a factor 2. Taking this into account would suggest increasing the best estimate from the energy balance models from around 2°C to around 4°C. Because of the uncertainty around the estimates and the thick tails, the estimate becomes 4.6°C. See Figure 1b of Armour (2016).
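The arithmetic of this footnote can be written out explicitly (the variable names are just labels for the three corrections; the thick-tail adjustment to 4.6°C is not reproduced here, only the simple multiplication):

```python
# Combine the three multiplicative bias corrections from the footnote.
richardson = 1.24  # incomplete spatial coverage, SST vs. air temperature
marvel = 1.30      # efficacy of non-CO2 forcings
armour = 1.25      # time dependence of the apparent climate sensitivity

combined = richardson * marvel * armour
print(combined)        # roughly a factor 2

# Applied to the roughly 2 degree best estimate of the energy budget models:
print(2.0 * combined)  # roughly 4 degrees; the skewed uncertainty
                       # distribution then pushes the best estimate to 4.6
```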

*** The ensemble of global climate models of the CMIP5 project have an average climate sensitivity of 3.5°C with a 95% uncertainty range of 2.0-5.6°C (Geoffroy, et al. 2013).

**** Many thanks to Kyle Armour and And Then There’s Physics for many helpful hints and comments. Any errors are naturally mine.

Related reading

Skeptical Science: How sensitive is our climate?

Climate dialogue: Climate Sensitivity and Transient Climate Response

Fans of Judith Curry: the uncertainty monster is not your friend

Tough, but interesting for scientists: Andrew Dessler talk at Ringberg15 on why the equilibrium climate sensitivity exceeds 2°C.


Armour, Kyle C., 2016: Projection and prediction: Climate sensitivity on the rise. Nature Climate Change, News and Views, doi: 10.1038/nclimate3079.

Armour, Kyle C., Cecilia M. Bitz and Gerard H. Roe, 2013: Time-Varying Climate Sensitivity from Regional Feedbacks. Journal of Climate, doi: 10.1175/JCLI-D-12-00544.1

Geoffroy, O., D. Saint-Martin, G. Bellon, A. Voldoire, D.J.L. Olivié and S. Tytéca, 2013: Transient Climate Response in a Two-Layer Energy-Balance Model. Part II: Representation of the Efficacy of Deep-Ocean Heat Uptake and Validation for CMIP5 AOGCMs. Journal of Climate, 26, pp. 1859- 1876, doi: 10.1175/JCLI-D-12-00196.1.

Marvel, K., G.A. Schmidt, R.L. Miller and L.S. Nazarenko, 2015: Implications for climate sensitivity from the response to individual forcings. Nature Climate Change, 6, pp. 386-389, doi: 10.1038/nclimate2888.

Richardson, Mark, Kevin Cowtan, Ed Hawkins and Martin B. Stolpe, 2016: Reconciled climate response estimates from climate models and the energy budget of Earth. Nature Climate Change, doi: 10.1038/nclimate3066. If you cannot read this article at Nature, you can go there via The Guardian, which has a special link that allows everyone to read (not download) the article. See also the News and Views on this article by Kyle Armour.

Otto, A., F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, 2013: Energy budget constraints on climate response. Nature Geoscience, 6, pp. 415-416, doi: 10.1038/ngeo1836.

Thursday, 30 June 2016

The EU, refugees and migration

An alternative to Brexit that helps refugees and workers

Summary. This post proposes an alternative to Brexit that makes all EU citizens better off and helps refugees more. Let's add to the refugee convention the condition that if you help refugees in their region, you do not have to house them at home. The real problem of migration is not the migration itself, but the reduction in bargaining power of the average worker. Rather than restrict the freedom of EU citizens to work elsewhere, we could also improve the bargaining power of workers in other ways. If we then also stop the neo-liberal projects TTIP, CETA and the Euro, the EU becomes an attractive way to collaborate for all European citizens.

Brexit, Geert Wilders, Nigel Farage, Marine Le Pen and thousands of refugees drowning in the Mediterranean. We should talk about the EU, refugees and migration.


The European Union started as a peace project, as a collaboration based on two industries that were crucial for war: coal and steel. It is often still sold as a peace project. That is certainly an aspect, but I think this part is oversold when people point to Europe's violent past. The EU surely helps. However, the frequency of international conflicts is decreasing outside the EU as well. The benefits of war have decreased: most capital nowadays is in humans and organisation and cannot easily be plundered. The costs of war have increased with nuclear and chemical weapons. The spread of democracy and the absence of war itself make war less likely.

If leaders and countries acted rationally the EU might no longer be necessary for peace in Europe. Marine Le Pen in France, Geert Wilders in The Netherlands, Nigel Farage in England and Donald Trump in the USA make clear that we should not count on every leader making a cost benefit analysis. The wars in Yugoslavia and Ukraine also warn us that war is possible in Europe. Peace is one of the benefits of the EU and one reason why right-wing extremists do not like it.

The main benefit of the EU is that it allows the citizens of Europe to collaborate and stand up against economic powers. Environmental problems belong to this category: powerful companies pollute to make more money, while people with less power have to deal with the consequences. This power abuse increases inequality. On a national level, it is the role of the government to solve such problems. The polluter can, however, threaten to move to another country. International collaboration in setting environmental standards makes such threats less credible and makes it easier for governments to serve their populations.

Many environmental problems are naturally also international, for instance, pollution of large rivers and acid rain, and natural candidates for collaboration to avoid international conflicts. A study just out this week tried to estimate the impact of European political measures to reduce air pollutants. It found that:
The reduction in PM2.5 concentrations [very small particles in the air] is calculated to have prevented 80 000 (37 000–116 000, at 95% confidence intervals) premature deaths annually across the European Union, resulting in a perceived financial benefit to society of US$232 billion annually (1.4% of 2010 EU GDP).
Those 80 thousand deaths and 1.4% of GDP are for small particles alone. Add to this all the other pollutants, workers' rights and consumer protection. National laws would on average be less strict, because firms would have a stronger negotiation position nationally. More people would die, more economic damage would be done. Socializing losses is a money-making machine: in this case, avoiding investments in cleaner technology or selling cheaper, lower-quality products increases private gains at our cost.

People need to collaborate to reduce tax competition between countries. The rich, and especially their money, are more mobile, and they can threaten democracies with paying their taxes elsewhere if the rates for the rich and large companies do not go down. That means that the lower 99%, you and me, have to pay more, which is why incomes did not increase for most groups in the last decades, while most of the new wealth went to the super rich. The EU should coordinate taxes much more and especially get rid of national tax tricks that allow foreigners to pay less tax than local people. The EU does this too little, but without the EU we can stop dreaming of achieving more justice here.

What amazes me most about the Leave campaign in the UK is that they managed to portray the EU as the establishment and themselves as the defenders of the common man. In reality both campaigns had their elites behind them, and leaving the EU would make the UK establishment more powerful. Rupert Murdoch supported the Leave campaign because politicians in London do what he tells them to do. Last time I looked, Rupert Murdoch was a member of the establishment and not a working man trying to get by.

Rupert Murdoch supported the Leave campaign because politicians in London do what he tells them to do

Yes, the EU also does terrible things. A democratic institution will not always follow your preferences; if you want that, try to become a dictator. The establishment naturally also sees that the EU is their main opponent and lobbies to make the EU do what they would like. This is facilitated by the fact that the media do not report much on the EU, so much can be done behind the backs of the people, which means that politicians do not have to fear losing their jobs for doing the bidding of the establishment. We should pay more attention and organize to make sure that our lobbies are also in Brussels.

Well-known examples of terrible neo-liberal EU projects are the trade agreements TTIP and CETA and the Euro. Europe is not a banana republic: our courts do their job well and there is no need for special TTIP private courts so that corporations can threaten governments that want to improve living conditions. It is an assault on our democracies. If the EU presses through TTIP or CETA, I will stop being reasonable and from then on I will be anti-EU.

The Euro is a mess; it has many, many problems. We should slowly move out of it.


The increase in the number of refugees is not just an EU problem. Improved information and travel possibilities mean that more people are travelling further to find a safe home.

It is not just an EU problem, but currently the consequences of the Bush-Blair war against Iraq are producing large refugee streams into the EU and tensions within the union. A closer EU foreign policy could have prevented this mess. (The misinformation campaign for the Iraq war is comparable with the Brexit campaign; in both cases the Anglo-American population did not do their due diligence.)

Most countries on Earth have signed the Geneva Convention relating to the Status of Refugees, which obliges them to act humanely and accept refugees. The duty to protect refugees is international law.

While most people have an empathetic side that wants to help people in need, we also have a tribal side, and many do not like too many people from other groups coming to stay with us. If you put yourself in the position of a Native American, that instinct can make sense. In retrospect it was a monumental mistake to let Columbus and Co. get away alive.

In response to the growing numbers, the refugee convention has been hollowed out by making it very difficult to enter a country and ask for asylum, as well as by the principle of safe third countries, which allows sending refugees away without investigating their case. As a consequence, many thousands of people drown in the Mediterranean trying to reach Europe, and Australia has set up a disgusting system of lawless concentration camps.

I would propose to add two principles to the convention:
1. That when a country helps refugees in the region they come from, it is no longer obliged to house them in its own country. But only then.
2. That refugees can also send a request for asylum by mail, so that people no longer die trying to cross the border.

The current refugee crisis started with insufficient aid for the refugee camps along the Syrian border, where people were literally going hungry. This new legal principle would make such cases of neglect less likely, because neglect would have consequences. If we do our part to help locally, asylum requests by mail would be no problem, because they could then be rejected.

Helping refugees locally is also better for them. We may be rich, but a refugee who is used to a different culture will have a hard time getting accustomed to the cold and impersonal European societies, next to all the other culture shocks. Even without considering cultural differences, staying in the region makes it easier to maintain social ties and to go back home when the problems are over.

Staying in the region makes it easier to maintain social ties and to go back home

We will not be able to help everyone locally, especially in case of small groups or individuals. A gay man who is threatened in Russia is helped most easily by granting him asylum in Europe.

These two new principles would strengthen the right of asylum, help refugees better and reduce the number of refugees coming to Europe. Racists will not like this solution, but for the majority, who experience a mix of empathy and concern, it should be a good one. People in favour of a multicultural society can support it because it helps refugees better (and there will be enough diversity left).


Refugees and migrants are often seen as one category, but they are fundamentally different. A refugee needs our help. Allowing the partner and children of a refugee to live with them would be migration, but seems a no-brainer as well for people with some empathy.

Economic migration is a different case. Even when it is good for a country, it may not be good for all segments of society. One reason we have democracy is to make sure that all interests are represented.

For the elite, migration is mostly nice. By definition the migrants see migration as a benefit, and the elite has other options, so if they migrate it is normally because they see clear benefits. Even before EU citizens could work everywhere in Europe, it was normally possible for scientists to work elsewhere because of the importance of migration for science. Science is highly specialized; there is no labour market in Germany for my specialization.

Many other professions are similarly specialised and professionals with high salaries were normally allowed to work in another country. Also a sufficiently wealthy pensioner will be happy to be allowed to migrate to another (warmer) country. Living in another country a few years can be very enriching.

If you are less well off, the possibility of migrating cheap labour can be used by firms to reduce your bargaining power, and you may end up with an even lower salary or without a job. The region the migrant comes from loses a valuable labourer. Migration can thus be used to increase inequality even more. Even for scientists from wealthy countries, migration makes negotiation positions weaker and thus labour conditions worse. But it is good for science and for scientists from poorer countries.

Salaries are determined by bargaining power, not by productivity, which is undefined for individuals in nonlinear production processes

Especially within the EU, I would be in favour of allowing everyone to work where they would like to. Freedom should be our default. In return for this benefit, the elite should compensate the disadvantages for the rest of society by improving the negotiation position of workers. One may think of migration restrictions for some professions, stronger unions, redistribution of wealth, programs for retraining, job guarantees for the unemployed and humane treatment of unemployed people.

A new European Union

There are many benefits of collaboration. The people of Europe need to collaborate to have the power to stand up against ever larger economic powers. There is no reason why this collaboration needs to be so intensive that the EU would become a nation itself. People's interests and customs differ and power is best exercised close to the people. We should only collaborate on large scales where this has a clear benefit.

The Euro makes inequality worse and creates a lot of negative political energy in the EU that prevents other positive changes. Let's get rid of it. Now that the worst of the financial crisis is over, this is a good time to start a slow transition.

If we change the refugee convention to alternatively help refugees in the region they come from, we can help them better than now, fewer will die on their way to Europe and another problem that creates bad blood in Europe would be gone.

Because refugees and migration are often seen as one problem, a reduction in the number of refugees may also reduce problems people have with migration. Still we should not be blind to the large difference in interests between the elites and the rest of society when it comes to migration. A compromise between the groups may be improving the negotiation position of workers.

Overarching above it all: you are not more pro-Europe the more you would like the EU to replace the old nations. When Juncker sees Brexit as a great opportunity to build a European nation and force the fast introduction of the Euro in every country, he is pro-Europe. When I reject that uncreative vision and see the EU as a way for the people of Europe to collaborate, I am also pro-Europe. Just like people argue nationally about what the role of government is, we should have an open discussion in the EU about where collaboration is fruitful and possible given our differences.

For me, the most valuable innovation of the EU is actually that it is neither a nation nor a mere treaty organization. That is the reason why nations all over the world are building similar regional collaborations. They would not if the aim were the end of their nations and only the building of a larger, more anonymous nation. Europe should be proud of its queer identity.

Related reading

Brexit is great news for the rest of the EU. Britain has not yet come to terms with its own irrelevance, and would only have got in the way of plans to create a more democratic pooling of sovereignty.

* Top photo: EU Grunge Flag, Attribution 2.0 Generic (CC BY 2.0).

Photo Auschwitz: Arbeit Macht Frei, Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0)

Map of Regional Organizations: CC BY-SA 3.0.

Thursday, 23 June 2016

Four wonderful climate science podcasts you need to know

[UPDATE: How is this for a Buzzfeed headline?]

For some years I was a regular listener of EconTalk, where the economist Russ Roberts would interview a colleague, typically about a recent book or article. Roberts is a staunch libertarian and the interviews with fellow libertarians are worse than listening in on drunk men agreeing with each other in a bar at 4am, but many other interviews with economists who went out in the world and had studied reality were wonderful. I learned a lot about the world view of this special tribe and something about the limits on how we organize society.

Podcasts are a nice way to learn. The conversation makes otherwise maybe boring topics engaging, and you can listen while commuting, walking or doing household chores.

I tried to get a few people enthusiastic about doing such a podcast for climate science and in the end even considered doing it myself. But there is no need any more: suddenly a wealth of really good climate science podcasts has sprung up.

Warm Regards

The newest podcast is by Eric Holthaus, journalist at Slate. It is called Warm Regards. He has as co-hosts climate scientist and Ice Age ecologist Jacquelyn Gill and New York Times science blogger Andrew Revkin. They are so new, in fact, that I could not listen to their first podcast yet: "How Do We Talk About Climate Change?". They are now also on iTunes.

[UPDATE. While preparing my dinner, I listened to the podcast. Really enjoyed it. Good voices. Good sound. Professionally made. They introduced themselves, what they work on and why they care about climate change. The main topic was science communication, and they emphasised that a good relationship with the listener is much more important than details that are quickly forgotten. Revkin likes talking to mitigation sceptics; the other two take the more productive route of trying to talk as much as possible to people who are willing to listen and consider the arguments. The app Block Together is a good way to keep lines of communication open with decent people on Twitter by very efficiently blocking harassing accounts. I also use it and can highly recommend it.

Personally I would add that we should not overestimate the importance of science communication. Outside of Anglo-America scientists are much less active in communicating climate change, yet we have nearly no problems with mitigation sceptical movements. The difference is a working political system and a better press. Talking about climate and science is what I do best, but if you have the option, it is probably better to invest your time in getting money out of US politics and building up a free and democratic press, for example by supporting membership-supported media channels.]

Climate History Podcast

The Climate History Podcast is hosted by Dr. Dagomar Degroot, founder of HistoricalClimatology.com and co-founder of the Climate History Network. Even if society has changed a lot, we can learn from how humans have responded to the small climatic changes of the past. This initiative is just three podcasts old and the one I listened to, on the Little Ice Age, is really interesting. The first three titles are:

1. Climate Change and Crisis: Lessons from the Past
2. The History of Climate Change with Professor Sam White
3. Archaeology in the Arctic: Reconstructing the Consequences of Climate Change in the Far North

You can listen to it on iTunes or download and listen to the podcast at SoundCloud.

Mostly Weather

The UK Met Office produces the podcast Mostly Weather. Officially it is about weather, but most topics are actually dual-use science, important for both weather and climate. (Mitigation sceptics often do not seem to know that meteorology is bigger than climatology.) It is made with love by climate scientists Doug McNeall, Niall Robinson and Claire Witham.

It is aimed at a general audience, but even a scientist can still learn something. I learned from their first podcasts on the history of weather forecasting that forecasting started over the ocean: there it is most important and easier to do. Other podcasts were on the elements (clouds, snow, lightning) and the structure of the atmosphere, and they just had a series on weather forecasting.

Forecast

For me as a scientist, the clear favorite is Forecast. It is made by Michael White, editor for climate topics at the scientific journal Nature. He makes it as a private project, but as editor he naturally has access to the best and the brightest and a good understanding of the climate system. This makes for in-depth interviews on the science, but he also talks a lot about the serendipitous personal and scientific histories of the scientists. I have the feeling that many non-scientists will be able to understand the interviews, but I admit I am not a very good judge of this.

The most recent interview was with Gabi Hegerl, the woman who discovered climate change (the first to do an attribution study). Other names people may recognize are Reto Knutti, astronaut Piers Sellers, Chris Field, Bjorn Stevens, Kim Cobb and Mat Collins. Oh, and a modeller from NASA GISS: Gavin Schmidt.

Have fun listening. Let me know if I missed something and which podcasts you like most.

Sunday, 8 May 2016

Grassroots scientific publishing

These were the weeks of peer review. Sophie Lewis wrote her farewell to peer reviewing. Climate Feedback is making it easy for scientists to review journalistic articles with nifty new annotation technology. And Carbon Brief showed that while there is a grey area, it is pretty easy to distinguish between science and nonsense in the climate "debate", which is one of the functions of peer review. And John Christy and Richard McNider managed to get an article published which, as a reviewer, I would have advised to reject. A little longer ago we had the open review of the Hansen sea level rise paper, where the publicity circus resulted in a-scientific elements spraying their graffiti on the journal wall.

Sophie Lewis writes about two recent reviews she was asked to make: one where the reviewers were negative, but the article was published anyway by the volunteer editor, and one where the reviewers were quite positive, but the manuscript was rejected by a salaried editor.

I have had similar experiences. As a reviewer you invest your time and heart in a manuscript and root for the ones you like to make it into print. Making the final decision is naturally the task of the editor, but it is very annoying as a reviewer to have the feeling your review is ignored. There are many interesting things you could have done in that time. At least nowadays you more often get to see the other reviews and hear the final decision, which is motivating.

The European Geosciences Union has a range of journals with open review, where you can see the first round of reviews and anyone can contribute reviews. This kind of open review could benefit from the annotation system used by Climate Feedback to review journalistic articles; it makes reviewing easier and the reader can immediately see the text a review refers to. The open annotation system allows you to add comments to any webpage or PDF article or manuscript. You can see it as an extra layer on top of the web.

The reviewer can select a part of the text and add comments, including figures and links to references. Here is an annotated article in the New York Times that Climate Feedback found to be scientifically very credible, where you can see the annotation system in action. You can click on the text with a yellow background to see the corresponding comment or click on the small symbol at the top right to see all comments. (Examples of articles with low scientific credibility are somehow mostly pay-walled; one would think that the dark money behind these articles would want them to be read widely.)

I got to know annotation via Climate Feedback. We use the annotation system of Hypothesis, and this system was actually not developed to annotate journalistic articles, but for reviewing scientific articles.

The annotation system makes writing a review easier for the reviewer and makes reviews easier to read. The difference between writing some notes on an article for yourself and a peer review becomes gradual this way. It cannot take away having to read the manuscript and trying to understand it. That takes most time, but it is the fun part; reducing the time for the tedious part makes it more attractive to review.

Publishing and peer review

Is there a better way to review and publish? The difficult part is no longer the publishing. The central part that remains is the trust of a reader in a source.

It starts to become ironic that the owners of the scientific journals are called "scientific publishers", because the main task of a publisher is nowadays no longer the publishing. Everyone can do that with a (free) word processor and a (free) web page. The publishers and their journals are mostly brands: the journal is a trusted name. Trust is slow to build up (and easy to lose), producing huge barriers to entry and leading to near-monopoly profits of scientific publishing houses of 30 to 40%. That is tax-payer money that is not spent on science, and it props up organizations that prefer to keep science unused behind pay-walls.

Peer review performs various functions. It helps to give a manuscript the initial credibility that makes people trust it, that makes people willing to invest time in studying its ideas. If the scientific literature were as abominable as the mitigation sceptical blog Watts Up With That (WUWT), scientific progress would slow down enormously. At WUWT the unqualified readers are supposed to find out for themselves whether they are being conned or not. Even if they did: having every reader do a thorough review is wasteful; it is much more efficient to ask a few experts to vet manuscripts first.

Without peer review it would be harder for new people to get others to read their work, especially if they make a spectacular claim or use unfamiliar methods. My colleagues will likely be happy to read my homogenization papers without peer review. Gavin Schmidt's colleagues will be happy to read his climate modelling papers and Michael Mann's colleagues his papers on climate reconstructions. But for new people it would be harder to be heard, for me it would be harder to be heard if I published something about another topic, and for outsiders it would be harder to judge who is credible. The latter is increasingly important the more interdisciplinary science becomes.

Improving peer review

When I was dreaming of a future review system where scientific articles were all in one global database, I used to think of a system without journals or editors. The readers would simply judge the articles and comments, like on Ars Technica or Slashdot. The very active open science movement in Spain has implemented such a peer review system for institutional repositories, where the manuscripts and reviews are judged and reputation metrics are estimated. Let me try to explain why I changed my mind and how important editors and journals are for science.

One of my main worries for a flat database would be that there would be many manuscripts that never got any review. In the current system the editor makes sure that every reasonable manuscript gets a review. Without an editor explicitly asking a scientist to write a review, I would expect that many articles would never get a review. Personal relations are important.

Science is not a democracy, but a meritocracy. Just voting an article up or down does not do the job. It is important that this decision is made carefully. You could try to statistically determine which readers are good at predicting the quality of an article, where quality could be determined by later votes or citations. This would be difficult, however, because it is important that the assessment is made by people with the right expertise, often by people from multiple backgrounds; we have seen how much even something as basic as the scientific consensus on climate change depends on expertise. Try determining expertise algorithmically. The editor knows the reviewers.

While it is not a democracy, the scientific enterprise should naturally be open. Everyone is welcome to submit manuscripts. But editors and reviewers need to be trusted and level-headed individuals.

More openness in publishing could in future come from everyone being able to start a "journal" by becoming an editor (or better, by organizing a group of editors) and trying to convince their colleagues that they do a good job. The fun thing about the annotation system is that you can demonstrate that you do a good job using existing articles and manuscripts.

This could provide real value for the reader. Not only would the reviews be visible, but it would also be possible to explain why an article was accepted: was it speculative but really interesting if true (something for experts), or was it simply solid (something for outsiders)? Which parts do the experts still debate? The debate would also continue after acceptance.

The code and the data of every "journal" should be open so that everyone can start a new "journal" with reviewed articles. So that when Heartland offers me a nice amount of dark money to start accepting WUWT-quality articles, a group of colleagues can start a new journal and fix my dark-money "mistakes", but otherwise have a complete portfolio from the beginning. If they would have to start from scratch that would be a large barrier to entry, which like the traditional system encourages sloppy work, corruption and power abuse.

Peer review is not just for selecting articles, but also helps to make them better. Theoretically the author could ask colleagues to do so, but in practice reviewers are better at finding errors. Maybe because the colleagues who will put in most effort are your friends, who have the same blind spots? These improvements of the manuscript would also be missing in a pure voting system of "finished" articles. Having a manuscript phase is helpful.

Finally, an editor makes anonymous reviews a lot less problematic, because the editor can delete comments where the anonymity seduced people into inappropriate behaviour. Anonymity could be abused to make false attacks with impunity. On the other hand, anonymity can also provide protection in case of large power differences when there are real problems.

The advantage of internet publishing is that there is no need for an editor to reject technically correct manuscripts. If the contribution to science is small or if the result is very speculative and quite likely to be found to be wrong in future, the manuscript can still be accepted but simply be given a corresponding grade.

This also points to a main disadvantage of the current dead-tree-inspired system: you get either a yes or a no. There is a bit more information in the journal the author chooses, but that is about it. A digital system can communicate much more subtly with a prospective reader. A speculative article is interesting for experts, but may be best avoided by outsiders until the issues are better understood. Some articles mainly review the state-of-the-art, others provide original research. Some articles have a specific audience: for example the users of a specific dataset or model. Some articles are expected to be more important for scientific progress than others or discuss issues that are more urgent than others. And so on. This information can be communicated to the reader.

The nice thing about the open annotation system is that we can begin reviewing articles before authors start submitting them. We can simply review existing articles as well as manuscripts, such as the ones uploaded to ArXiv. The editors could reject articles that should not have been published in the traditional journals and accept manuscripts from archives. I would trust this assessment by a knowledgeable editor (team) more than acceptance by a traditional journal.

In this way we can produce collections of existing articles. If the new system provides a better reviewing service to science, the authors at some moment can stop submitting their manuscripts to traditional journals and submit them directly to the editors of a collection. Then we have real grassroots scientific journals that serve science.

For colleagues in the communities it would be clear which of these collections have credibility. For outsiders, however, we would also need some system that communicates this, which would traditionally be the role of publishing houses and their high barriers to entry. This could be assessed where collections overlap, preferably again by humans and not by algorithms. For some articles there may be legitimate reasons for differences (hard to assess, outside the topic of a collection); for other articles an editor not having noticed problems may be a sign of bad editorship. This problem is likely not too hard: in a recent analysis of Twitter discussions on climate change there was a very clear distinction between science and nonsense.

There is still a lot to do, but with the ease of modern publishing and the open annotation system a lot of the software is already there. Larger improvements would be tools for editors to moderate review comments (or at least to collapse less valuable comments); Hypothesis is working on it. A grassroots journal would need a grading system, standardized where possible. More practical tools would include some help in tracking the manuscripts under review and in sending reminders, and the editors of one collection should be able to communicate with each other. The grassroots journal should remain visible even if the editor team stops; that will need collaboration with libraries or science societies.

If we get this working
  • we can say goodbye to frustrated reviewers (well mostly),
  • goodbye to pay-walled journals in which publicly financed research is hidden for the public and many scientists alike and
  • goodbye to wasting limited research money on monopolistic profits by publishing houses, while
  • we can welcome better review and selection and
  • we are building a system that inherently allows for post-publication peer review.

What do you think?

Related reading

There is now an "arXiv overlay journal", Discrete Analysis. Articles are published/hosted by ArXiv; otherwise peer review is traditional. The announcement mentions three software initiatives that make starting a digital journal easy: Scholastica, and Open Journal Systems.

Annotating the scholarly web

A coalition for Annotating All Knowledge: a new open layer is being created over all knowledge.

Brian A. Nosek and Yoav Bar-Anan describe a scientific utopia: Scientific Utopia: I. Opening scientific communication. I hope the ideas in the above post make this transition possible.

Climate Feedback has started a crowdfunding campaign to be able to review more media articles on climate science.

Farewell peer reviewing

7 Crazy Realities of Scientific Publishing (The Director's Cut!)

Mapped: The climate change conversation on Twitter

I would trust most scientists to use annotation responsibly, but it can also be used to harass vulnerable voices on the web. Genius Web Annotator vs. One Young Woman With a Blog

Nature Chemistry blog: Post-publication peer review is a reality, so what should the rules be?

Report from the Knowledge Exchange event: Pathways to open scholarship gives an overview of the different initiative to make science more open.

Magnificent BBC Reith lecture: A question of trust

Sunday, 1 May 2016

Christy and McNider: Time Series Construction of Summer Surface Temperatures for Alabama

John Christy and Richard McNider have a new paper in the AMS Journal of Applied Meteorology and Climatology called "Time Series Construction of Summer Surface Temperatures for Alabama, 1883–2014, and Comparisons with Tropospheric Temperature and Climate Model Simulations". Link: Christy and McNider (2016).

This post gives just a few quick notes on the methodological aspects of the paper.
1. They select data with a weak climatic temperature trend.
2. They select data with a large cooling bias due to improvements in radiation protection of thermometers.
3. They developed a new homogenization method using an outdated design and did not test it.

Weak climatic trend

Christy and McNider wrote: "This is important because the tropospheric layer represents a region where responses to forcing (i.e., enhanced greenhouse concentrations) should be most easily detected relative to the natural background."

The trend in the troposphere should be a few percent stronger than at the surface, mainly in the tropics. However, it is interesting that they see a strong trend as a reason to prefer tropospheric temperatures, because when it comes to the surface they select the period and temperature with the smallest trend: the daily maximum temperatures in summer.

The trend in winter due to global warming should be 1.5 times the trend in summer and the trend in the night time minimum temperatures is stronger than the trend in the day time maximum temperatures, as discussed here. Thus Christy and McNider select the data with the smallest trend for the surface. Using their reasoning for the tropospheric temperatures they should prefer night time winter temperatures.

(And their claim on the tropospheric temperatures is not right, because whether a trend can be detected depends not only on the signal, but also on the noise. The weather noise due to El Nino is much stronger in the troposphere and the instrumental uncertainties are also much larger. Thus the signal-to-noise ratio is smaller for the tropospheric temperatures, even if that record were as long as the surface observations.

Furthermore, I am somewhat amused that there are still people interested in the question whether global warming can be detected.)

[UPDATE. Tamino shows that within the USA, Alabama happens to be the region with the least warming. The more so for the maximum temperature. The more so for the summer temperature.]

Cooling bias

Then they used data with a very large cooling bias due to improvements in the protection of the thermometer against (solar and infra-red) radiation. Early thermometers were not protected as well against solar radiation and typically recorded too high temperatures. Early thermometers also recorded too cool minimum temperatures: the thermometer should not see the cold sky, otherwise it radiates out to it and cools. The warming bias in the maximum temperature is larger than the cooling bias in the minimum temperature; thus the mean temperature still has some bias, but less than the maximum temperature.

Due to this reduction in the radiation error summer temperatures have a stronger cooling bias than winter temperatures.

The warming effect of early measurements on the annual means is probably about 0.2 to 0.3°C. In the maximum temperature it will be a lot higher, and in the summer temperature it will again be a lot higher.

That is why most climatologists use the annual means. Homogenization can improve climate data, but it cannot remove all biases. Thus it is good to start with the data that has the least bias, much better than starting with a highly biased dataset like Christy and McNider did.

Statistical homogenization removes biases by comparing a candidate station to its neighbour. The stations need to be close enough together so that the regional climate can be assumed to be similar in both stations. The difference between two stations is then weather noise and inhomogeneities (non-climatic changes due to changes in the way temperature was measured).

If you want to be able to see the inhomogeneities, you need well-correlated neighbours with as little weather noise as possible. By using only the maximum temperature, rather than the mean temperature, you increase the weather noise. By using the monthly means in summer, rather than the annual means or at the very least the summer means, you increase the weather noise. By going back in time more than a century you increase the noise, because we had fewer stations to compare with at the time.
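To make this signal-to-noise argument concrete, here is a minimal sketch with synthetic data (my illustration, not the paper's stations; the station noise levels and the 0.4°C break size are invented for the example). Subtracting a well-correlated reference cancels the shared regional climate, so a non-climatic jump stands out against the remaining weather noise.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 130  # years of annual means

# Shared regional climate signal plus independent station weather noise.
climate = 0.005 * np.arange(n) + rng.normal(0.0, 0.5, n)
candidate = climate + rng.normal(0.0, 0.3, n)
reference = climate + rng.normal(0.0, 0.3, n)

# A non-climatic change (e.g. a better radiation screen) cools the
# candidate by 0.4 degrees from year 80 onward.
candidate[80:] -= 0.4

# The difference series cancels the shared climate and its variability;
# only the two stations' weather noise and the inhomogeneity remain.
diff = candidate - reference

print(round(diff[:80].mean(), 2), round(diff[80:].mean(), 2))
```

Noisier input (monthly summer maxima instead of annual means, or more distant neighbours) widens the noise band around those two levels and makes the break correspondingly harder to detect and to date.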

They keyed part of the data themselves, mainly for the period before 1900, from the paper records. It sounds as if they performed no quality control on these values (to detect measurement errors). This will also increase the noise.

With such a low signal-to-noise ratio (inhomogeneities that are small relative to the weather noise in the difference time series), the estimated dates of the breaks they did find will have a large uncertainty. It is thus a pity that they purposefully did not use information from station histories (metadata) to get the dates of the breaks right.

Homogenization method

They developed their own homogenization method and only tested it on a noise signal with one break in the middle. Real series have multiple breaks, in the USA typically every 15 years. Furthermore, the reference series also has breaks.

The method uses the detection equation from the Standard Normal Homogeneity Test (SNHT), but with different significance levels. Furthermore, for some reason it does not use the hierarchical splitting of SNHT to deal with multiple breaks, but detects on a window, in which it is assumed there is only one break. However, if you select the window too long it will contain more than one break, and if you select it too short the method will have no detection power. You would thus theoretically expect the use of a window for detection to perform very badly, and this is also what we found in a numerical validation study.
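For reference, a minimal sketch of the classical single-break SNHT statistic (Alexandersson's test), applied to a synthetic difference series; this is my own illustration, not the Christy and McNider variant. Hierarchical splitting then applies this test recursively to the segments on either side of each detected break, instead of sliding a fixed window.

```python
import numpy as np

def snht(series):
    """Single-break SNHT: T(k) = k*z1**2 + (n-k)*z2**2 on the
    standardized series, where z1 and z2 are the means before and
    after candidate break k; the most likely break is where T peaks."""
    z = (series - series.mean()) / series.std()
    n = len(z)
    t = np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                  for k in range(1, n)])
    k_best = int(np.argmax(t)) + 1  # first index of the second segment
    return k_best, float(t.max())

# A difference series with one 0.8-degree break at position 60.
rng = np.random.default_rng(0)
diff = np.where(np.arange(100) < 60, 0.0, 0.8) + rng.normal(0.0, 0.3, 100)

k_best, t_max = snht(diff)
print(k_best, round(t_max, 1))
```

With a second break in the series, or breaks in the reference, this single-break statistic is no longer valid as is, which is exactly why the splitting (or a multiple-breakpoint method) is needed rather than an ad-hoc window.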

I see no real excuse not to use better homogenization methods (ACMANT, PRODIGE, HOMER, MASH, Craddock). These are built to take into account that the reference station also has breaks and that a series will have multiple breaks; no need for ad-hoc windows.

If you design your own homogenization method, it is good scientific practice to test it first, to study whether it does what you hope it does. There is, for example, the validation dataset of the COST Action HOME. Using that immediately allows you to compare your skill to the other methods. Given the outdated design principles, I am not hopeful the Christy and McNider homogenization method would score above average.


These are my first impressions on the homogenization method used. Unfortunately I do not have the time at the moment to comment on the non-methodological parts of the paper.

If there are no knowledgeable reviewers available in the USA, it would be nice if the AMS would ask European researchers, rather than some old professor who in the 1960s once removed an inhomogeneity from his dataset. Homogenization is a specialization; it is not trivial to make data better, and it really would not hurt if the AMS asked for expertise from Europe when American experts are busy.

Hitler is gone. The EGU general assembly has a session on homogenization, the AGU does not. The EMS has a session on homogenization, the AMS does not. EUMETNET organizes data management workshops, a large part of which is about homogenization; I do not know of an American equivalent. And we naturally have the Budapest seminars on homogenization and quality control. Not Budapest, Georgia, nor Budapest, Missouri, but Budapest, Hungary, Europe.

Related reading

Tamino: Cooling America. Alabama compared to the rest of contiguous USA.

HotWhopper discusses further aspects of this paper and some differences between the paper and the press release. Why nights can warm faster than days - Christy & McNider vs Davy 2016

Early global warming

Statistical homogenisation for dummies

Tuesday, 26 April 2016

Climate scientists are now grading climate journalism

Guest post by Daniel Nethery and Emmanuel Vincent. Daniel Nethery is the associate editor and Emmanuel Vincent is the founder of Climate Feedback. Climate Feedback is launching a crowdfunding campaign today.

The internet represents an extraordinary opportunity for democracy. Never before has it been possible for people from all over the world to access the latest information and collectively seek solutions to the challenges which face our planet, and not a moment too soon: the year 2015 was the hottest in human history, and the Great Barrier Reef is suffering the consequences of warming oceans right now.

Yet despite the scientific consensus that global warming is real and primarily due to human activity, studies show that only about half the population in some countries with among the highest CO2 emissions per capita understand that human beings are the driving force of our changing climate. Even fewer people are aware of the scientific consensus on this question. We live in an information age, but the information isn’t getting through. How can this be?

While the internet puts information at our fingertips, it has also allowed misinformation to sow doubt and confusion in the minds of many of those whose opinions and votes will determine the future of the planet. And up to now scientists have been on the back foot in countering the spread of this misinformation and pointing the public to trustworthy sources of information on climate change.

Climate Feedback intends to change that. It brings together a global network of scientists who use a new web-annotation platform to provide feedback on climate change reporting. Their comments, which bring context and insights from the latest research, and point out factual and logical errors where they exist, remain layered over the target article in the public domain. You can read them for yourself, right in your browser. The scientists also provide a score on a five-point scale to let you know whether the article is consistent with the science. For the first time, Climate Feedback allows you to check whether you can trust the latest breaking story on climate change.

An example of Climate Feedback in action. Scientists’ comments and ratings appear as a layer over the article. Text annotated with Hypothesis is highlighted in yellow in the web browser and scientists’ comments appear in a sidebar next to the article. Illustration: Climate Feedback

Last year the scientists looked at some influential content. Take the Pope’s encyclical, for instance. The scientists gave those parts of the encyclical relating to climate science a stamp of approval. Other “feedbacks,” as we call them, have made a lasting impact. When the scientists found that an article in The Telegraph misrepresented recent research by claiming that the world faced an impending ice age, the newspaper issued a public correction and substantially modified the online text.

But there’s more work to be done. Toward the end of the year the scientists carried out a series of evaluations of some of Forbes magazine’s reporting on climate change. The results give an idea of the scale of the problem we’re tackling. Two of the magazine’s most popular articles for 2015, one of which attracted almost one million hits, turned out to be profoundly inaccurate and misleading. Both articles, reviewed by nine and twelve scientists, unanimously received the lowest possible scientific credibility rating. This rarely occurs, and just in case you’re wondering, yes, the scientists do score articles independently: ratings are only revealed once all scientists have completed their review.

We argue that scientists have a moral duty to speak up when they see misinformation masquerading as science. Up to now scientists have however had little choice but to engage in time-consuming op-ed exchanges, which result in one or two high-profile scientists arguing against the views of an individual who may have no commitment to scientific accuracy at all. Climate Feedback takes a different approach. Our collective reviews allow scientists from all over the world to provide feedback in a timely, effective manner. We then publish an accessible synthesis of their responses, and provide feedback to editors so that they can improve the accuracy of their reporting.

We’ve got proof of concept. Now we need to scale up, and for that we need the support of everyone who values accuracy in reporting on one of the most critical challenges facing our planet. Climate Feedback won’t reach its full potential until we start measuring the credibility of news outlets in a systematic way. We want to be in a position to carry out an analysis of any influential internet article on climate change. We want to develop a ‘Scientific Trust Tracker’ – an index of how credible major news sources are when it comes to climate change.

We’re all increasingly relying on the internet to get our news. But the internet has engendered a competitive media environment where in the race to attract the most hits, sensational headlines can trump sober facts. We’re building into the system a new incentive for journalists with integrity to get ahead. Some journalists are already coming to us, asking our network of scientists to look at their work. We want readers to know which sources they can trust. We want editors to think twice before they publish ideological rather than evidence-based reporting on global warming.

On Friday 22 April 2016, more than 170 countries signed the Paris climate agreement. But this unprecedented international treaty will lead to real action only if the leaders of those countries can garner popular support for the measures needed to curb greenhouse gas emissions. The fate of the Paris deal lies largely in the hands of voters in democratic countries, and we cannot expect democracies to produce good policy responses to the challenge of climate change if voters have a confused understanding of reality.

Scientists from all over the world are standing up for better informed democracies. You can help them make their voices heard. We invite you to stand with us for a better internet. We invite you to stand with science.

Victor Venema: I am also part of the Climate Feedback community and have annotated several journalistic articles when they made claims about climate data quality. It is a very effective way to combat misinformation. Just click on the text and add a short comment; Climate Feedback will take care of spreading your contribution. If you are a publishing scientist, do consider joining.

* Photo at the top: severe suburban flooding in New Orleans, USA, in the aftermath of Hurricane Katrina. Photo by Mark Moran, NOAA Corps, NMAO/AOC (CC BY 2.0)

Tuesday, 29 March 2016

Upcoming meetings on climate data quality

It looks like 2016 will be a year full of interesting conferences. I have already pointed you to EGU, IMSC and EMS. Here is an update with three upcoming European meetings, including two close deadlines.

The marine climate data community will hold its main workshop this July (18 to 22). The abstract deadline has just been extended to Wednesday, 6 April, next week.

The metrologists (no typo) organize a meeting on climate data, MMC2016. It will take place from 26 to 30 September and will be organized together with the WMO TECO conference.

And naturally we will have the European Meteorological Society meeting in autumn. This year is an ECAC year (European Conference on Applied Climatology). The abstract submission deadline is in about three weeks, on 21 April 2016, during EGU. So start writing soon. As always, we will have a session on "Climate monitoring; data rescue, management, quality and homogenization" for the homogenization-addicted readers of this blog.

If you know of more interesting conferences, do add them in the comments.


The Fourth International Workshop on the Advances in the Use of Historical Marine Climate Data (MARCDAT-IV) will be held at the National Oceanography Centre, Southampton, UK between the 18th and 22nd July 2016. The workshop will be arranged around the following themes:
  • Data homogenization (benchmarking, bias adjustments, step change analysis, metadata)
  • Quantification and estimation of uncertainty
  • Data management, recovery and reprocessing (digitisation efforts and reprocessing of previously digitised data)
  • Reconstructing past climates
  • Integrating in-situ and satellite data sources
  • Consistency of the climate across domain boundaries (land, ocean, surface, subsurface, atmosphere)
  • The role of ICOADS and applications of marine climate data
  • Review of the 10-year action plan
This looks like an invitation for people working on land data to participate as well. I just asked some colleagues on the homogenization list, and it seems there are a decent number of weather stations near the coast. We could compare them to marine observations; I would be especially interested in comparing with sea surface temperature observations.
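As a toy illustration of such a land-ocean comparison, one could look at the difference between a coastal station series and collocated sea surface temperatures. This is only a minimal sketch; all the numbers below are invented for illustration, not real observations.

```python
# Minimal sketch of comparing a coastal station series with collocated
# sea surface temperature (SST); all values are invented for illustration.

def difference_series(station_c, sst_c):
    """Pairwise station-minus-SST differences for collocated means (deg C)."""
    return [s - m for s, m in zip(station_c, sst_c)]

station = [14.2, 15.1, 16.8, 18.0]  # hypothetical coastal station, deg C
sst = [13.9, 14.8, 16.1, 17.2]      # hypothetical nearby SST, deg C

diffs = difference_series(station, sst)
mean_diff = sum(diffs) / len(diffs)
print(f"mean station-minus-SST difference: {mean_diff:.2f} deg C")
```

In practice one would, of course, first have to match station and marine observations in space and time and worry about their very different measurement characteristics; the difference series is just the starting point.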

This workshop is free to attend, but participants must register.

Key Dates:
Abstract submission deadline: 8th April 2016
Registration closes: 31st May 2016


The international workshop on Metrology for Meteorology and Climate (MMC2016) will be held in Madrid, Spain, from 26 to 30 September 2016, in conjunction with the WMO TECO 2016 conference and the Meteorological Technology World Expo 2016.
In recent years, an increasing collaboration has been established between the metrology and meteorology communities. EURAMET, the European association of national metrology institutes, is funding several projects aimed at delivering results of valuable impact for meteorology and climatology. The key aspect of these projects is the traceability of measurements and the uncertainties of the measured physical and chemical quantities describing the Earth's atmosphere. The MMC conference aims to give these two communities an opportunity to present and discuss needs, methods, expertise and devices for cooperating in producing better data. The invitation is addressed to the metrology, meteorology and climate scientific communities and operators. After the first MMC in 2014 in Brdo, Slovenia, MMC2016 is organized in Madrid, Spain, in conjunction with the CIMO-TECO conference.
As far as I know, the abstract deadline has not been set yet, but write the dates in your agenda, bookmark the homepage and ask the organizers to notify you once the deadline is known.


The conference theme of the Annual Meeting of the European Meteorological Society is: Where atmosphere, sea and land meet: bridging between sciences, applications and stakeholders. It will be held from 12 to 16 September 2016 in Trieste, Italy. The abstract submission deadline is 21 April 2016, during EGU2016.

MC1 Climate monitoring; data rescue, management, quality and homogenization
Convener: Manola Brunet-India
Co-Conveners: Hermann Mächel, Victor Venema, Ingeborg Auer, Dan Hollis 
Robust and reliable climatic studies, particularly assessments of climate variability and change, depend greatly on the availability of, and access to, high-quality, high-resolution and long-term instrumental climate data. At present, the restricted availability of long-term, high-quality climate records and datasets still limits our ability to understand, detect, predict and respond to climate variability and change at spatial scales below the global. In addition, the provision of reliable and timely climate services relies deeply on the availability of high-quality and high-resolution climate data, which also requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.

In this session, we welcome contributions (oral and poster) in the following major topics:
  • Climate monitoring, including early warning systems and improvements in the quality of the observational meteorological networks
  • More efficient transfer of the data rescued into the digital format by means of improving the current state-of-the-art on image enhancement, image segmentation and post-correction techniques, innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to transfer data, defining best practices about the operational context for digitisation, improving techniques for inventorying, organising, identifying and validating the data rescued, exploring crowd-sourcing approaches or engaging citizen scientist volunteers, conserving, imaging, inventorying and archiving historical documents containing weather records
  • Climate data and metadata processing, including climate data flow management systems, from improved database models to better data extraction, development of relational metadata databases and data exchange platforms and networks interoperability
  • Innovative, improved and extended climate data quality controls (QC), including both near real-time and time-series QCs: from gross-errors and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables
  • Improvements to the current state-of-the-art of climate data homogeneity and homogenisation methods, including methods intercomparison and evaluation, along with other topics such as climate time-series inhomogeneities detection and correction techniques/algorithms (either absolute or relative approaches), using parallel measurements to study inhomogeneities and extending approaches to detect/adjust monthly and, especially, daily and sub-daily time-series and to homogenise all essential climate variables
  • Fostering evaluation of the uncertainty budget in reconstructed time-series, including the influence of the various data processes steps, and analytical work and numerical estimates using realistic benchmarking datasets
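To make the quality-control topic above a bit more concrete, here is a minimal sketch of two of the simplest checks mentioned in the session description: a gross-error (plausibility) check and a temporal coherence (step) check. The thresholds are illustrative assumptions, not operational values.

```python
# Minimal sketch of two basic QC checks for a daily temperature series.
# The thresholds below are illustrative assumptions, not operational values.

def gross_error_check(temps_c, lower=-80.0, upper=60.0):
    """Flag values outside physically plausible bounds (gross errors)."""
    return [not (lower <= t <= upper) for t in temps_c]

def step_check(temps_c, max_jump=25.0):
    """Flag suspiciously large day-to-day jumps (temporal coherence)."""
    flags = [False] * len(temps_c)
    for i in range(1, len(temps_c)):
        if abs(temps_c[i] - temps_c[i - 1]) > max_jump:
            flags[i] = True
    return flags

series = [12.1, 13.4, 99.9, 14.0, -45.0, 13.2]
print(gross_error_check(series))  # [False, False, True, False, False, False]
print(step_check(series))         # [False, False, True, True, True, True]
```

Operational QC systems naturally go far beyond this, adding spatial coherence tests against neighbouring stations and climatological tolerance checks, but the basic flag-and-review pattern is the same.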
Besides this session, readers of this blog may also like: Climate change detection, assessment of trends, variability and extremes. And I personally like the session on Spatial Climatology, which has a lot to do with structure and variability.

Top photo by Martin Duggan, which has a CC BY 2.0 license.