Friday, May 19, 2017

New local station trends - comments.

Yesterday I posted a new WebGL map of station trends. I'd like to follow up with comments on two topics, both of which follow from a fix to a problem which added noise, and some bias, to the old version. With the clearer picture, I'd like to point out how the trends really do show a quite smooth consistent picture, mostly, even before adjustment. Then I'd like to talk about the exceptions (USA and China) and the effect of homogenisation.

Then (below the jump) I'll talk more about the effect of removing seasonality. It is substantial, and, I think, instructive.

First I'll show Europe - unadjusted on left, adjusted on right. All images here are of the thirty-year period from 1987 to 2016. It shows a pattern typical of most of the world: a largely uniform warming trend, with a few exceptions. The cool blob on the left, in the N Atlantic, is a shadow of a more prominent cooling in that area in more recent years. The effect of adjustment is not so radical, but it does reduce some of the excursions, some almost fully. It's possible the excursions were real, but given the general uniformity, it seems more likely that they were inhomogeneities.



Next is the USA, with some of Canada for contrast. The density of stations is obvious, as is the inconsistent but strong cooling trend. The issue is TOBS (time of observation bias). A lot of stations changed with the conversion to MMTS, and the changes were generally in a direction that created artificial cooling. With adjustment, which includes TOBS correction, the picture is much clearer. Still some cooling in the mid-west, but otherwise warming, as in the rest of N America.



Finally, China. The stations are sparser, but again fairly irregular, although the denser regions are more consistent. And this time homogenisation does not make a consistent warming or cooling change. It does moderate some of the extreme cooling, so that might have a warming effect overall.



Finally, I would urge readers to check the page in detail, to see the overall effect of adjustment (the swap button helps here). The main thing to see is that adjustment does not have a general effect of increasing trends. It's true that it is hard to distinguish shades of red, but at least warm trends are not being created out of nothing.

Below the jump I'll deal with the seasonal issue.

Thursday, May 18, 2017

WebGL map of local station trends - various periods.

I have updated the page where I show trends over various periods at GHCN land stations and ERSST measures at sea. The old page is here. The map shows trends as a shaded color over the triangular mesh. The shade is exact for the nodes, which you can also query by clicking. Posts on the previous page are here and later here.

The page is not automatically updated, since the trends are over at least two decades. However, the previous page was made in 2012, so a data update was needed. And it makes sense to use the new MoyGLV2.1 WebGL facility. I had been slow to update the old data partly because I had used a rather neat, but hard to debug, mesh compression scheme, described here. Each period needs a separate mesh, so compression helped. However, downloads are now generally quicker than in 2012, so the full 3 Mb of data does not seem so forbidding. So I have sadly let that go. However, for this post I have put the WebGL below the jump, as it still may take quite a few seconds for some.

I also updated the computing method to correct a source of noise in the previous page. I think the issue is instructive, and in 2012, I hadn't done the thinking explained in some of my many pages on averaging, eg here. I have frequently explained why anomalies are used in spatial averaging, to overcome inhomogeneities. But I had not thought they were needed for a trend at a single station. But they are - seasonal variation is a big source of inhomogeneity, and should be subtracted out. It shows itself in two ways:
  • If missing values cluster in a cold or hot time, especially biased toward one end of the period, then it introduces a spurious trend, and
  • you can even get a spurious trend with all data present. sin(x) between 0 and 360° has a trend, rising almost the full amplitude. Taking 30 cycles reduces this by a factor of 30, but with a seasonal range of say 20°C, that can still be serious. Fortunately a calendar year is more like cos, which doesn't have a trend over that period, but not all data runs to a full calendar year at the end.


The remedy is, for each station, to calculate the mean observed seasonal cycle, and subtract that out. I did that, to good effect. So, below the jump, or on the revised page, you can check out trends from the last two decades to century plus. The radio buttons let you look at unadjusted or adjusted GHCN (prefixes un_ and ad_). One thing I found useful is to compare (swap button) two trends for the same period, one adjusted, one not. It is clear that homogenisation clears up all kinds of aberration, without greatly affecting the main trend pattern, which except for aberrations is quite smooth in space.
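To make the point concrete, here is a minimal R sketch with synthetic monthly data (not the real GHCN series): an unremoved seasonal cycle with some missing months near one end produces a spurious trend, and subtracting each calendar month's observed mean removes it.

set.seed(1)
months <- 1:360                                  # 30 years of monthly data
seas   <- 10 * sin(2 * pi * months / 12)         # ~20°C peak-to-peak seasonal cycle
temp   <- seas + rnorm(360, sd = 0.5)            # no underlying trend at all
temp[350:360] <- NA                              # some missing months at one end

trend_per_century <- function(y, t) {            # OLS trend in °C/century
  ok <- !is.na(y)
  1200 * coef(lm(y[ok] ~ t[ok]))[2]
}
trend_per_century(temp, months)                  # spurious, clearly non-zero

# remedy: subtract the mean observed cycle for each calendar month
cyc  <- ave(temp, (months - 1) %% 12, FUN = function(x) mean(x, na.rm = TRUE))
anom <- temp - cyc
trend_per_century(anom, months)                  # now close to zero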

So below the jump is the revised map. There are some operating instructions on the page, or more detail on the WebGL page or post.

Tuesday, May 16, 2017

GISS April down 0.23°C - second warmest April on record.

I have been noting records showing a large drop from the very warm levels of March. NCEP/NCAR was down 0.23°C, TempLS down by 0.165°C (now 0.16). GISS was also down 0.23°C, from 1.11°C in March to 0.88°C in April. But that is still warmer than any previous April except 2016. And it is warmer than the annual average for 2015 (0.82°C), itself a notable record in its time. Sou has more. The April temperature is back to that of January, after the peaks of Feb and March.

The NCEP/NCAR daily record showed what happened. There was a sharp descent through the month, seeming to bottom out at the end. May has recovered somewhat, but is likely to also be much cooler than March, and is so far behind the April average.

I showed last month the year-to-date plot, compared with other warm years, noting that the year so far was ahead of the 2016 average, as shown by the red curve and horizontal line. Now YTD 2017 is right on the 2016 average. May will probably bring it below. Record prospects for 2017 now depend a lot on renewed El Nino activity. Here is the current YTD plot:



As usual, I will compare the GISS and previous TempLS plots below the jump. As with TempLS, there were fewer big features - lingering warmth in Siberia/Arctic, some cold in Antarctic.

Wednesday, May 10, 2017

Global surface anomaly down 0.165°C in April.

I've been waiting for three days for China to report - most others are very punctual lately. So it could change a little. But enough is enough - and last month, when I waited for China, they sent in February data, so it would have been better not to wait. Anyway, TempLS mesh showed a drop from 0.894°C in March to 0.729°C in April. That compares to a larger 0.226°C drop in the reanalysis index. Meanwhile, the troposphere indices went up - 0.08°C for UAH V6. As I often seem to have to say, it is a different place.

Despite the drop, April was still very warm. It was the 16th warmest month of any kind in the TempLS record. It was warmer than any annual average before 2016, including the then record year of 2015.

There was still quite a lot of warmth in the Siberia/Arctic region, and also in the east US. Antarctica was cold. Here is the breakdown plot:



Probably the main point of future interest is that SST is quite a lot higher. Elsewhere it was mostly moderate, which is a reduction for Siberia and the Arctic.



Monday, May 8, 2017

The WebGL facility - versions.

Clive Best has been making good use of the WebGL facility. So I thought I should be more formal about versioning. I have been calling the current V2 a beta; I'll now drop the beta, and stop tinkering with V2, apart from bug fixes. The next version will be 2.1. I'll include that in the URL, and keep old versions posted, so for existing apps you won't be affected by changes, unless you call the update URL.

The main change I made (today) to V2 was to the dragging. There hadn't been any external control on update frequency, and so dragging a globe with a lot of triangles or lines could lead to superposition of successive images, with messy results. I have put in a 20 millisecs delay, so it can only update 50 times per sec. That delay doesn't seem to be perceptible, and mostly fixes that problem. You can vary this; the default is
U.delay=20.

The other main change is that there is now an option in the user file to define an additional function called MoyLate(p,U). This has the same syntax and functionality as MoyDat, but it is implemented after the extra objects like line (_L) edges. You can assign them properties at this stage; it wasn't possible in MoyDat(). You can't define new objects here, and it isn't the place to vary objects defined in MoyDat(). You can set colors, or maybe more usefully, vary the show property, eg
p.Mesh_L.show=0
That means that initially the line edges won't show, and the checkbox will be there but blank.

Another change is that in the calling HTML, you still need to provide a DIV tag before the script calls, but it doesn't need an ID. If you don't provide a DIV, it will go looking for somewhere to hang the app. In principle, this means that you can have several apps running on the same page (without iframes), but I think that needs more work.



Friday, May 5, 2017

Nature paper on the "hiatus".

There is a new Nature paper getting discussed in various places. It is called Reconciling controversies about the 'global warming hiatus'. There is a detailed discussion in the LA Times. The Guardian chimes in. I got involved through a WUWT post on a GWPF paper. They seem to find support in it, but other skeptics seem to think the reconciliation was effective, and are looking for the catch.

I thought it was a surprisingly political article for Nature, in that it traces how the hiatus gained prominence through pressure from contrarians and right wing politics, and scientists gradually came to take it seriously. I think they are right, but the process should be resisted. There really isn't much there, and the fact that contrarians create a hullabaloo doesn't mean that it is worth serious study. I'll show why I think that.

I'm going to show plots of various data since 2001, which is the period quoted (eg by GWPF) which excludes the 1998 El Nino. They weren't so scrupulous about that in the past, but now they want to exclude the recent warm years. Typically "hiatus" periods end about 2013. I recommend using the temperature trend viewer to see this in perspective. The most hiatus-prone of the surface datasets, by far is HADCRUT (Cowtan and Way explain why). Here is the Viewer picture of HADCRUT 4 trends in the period:



Each dot represents a trend period, between the start year on the y-axis and the end on the x-axis. It's a lot easier to figure out in the viewer, which has an active time series graph which will show you when you click what is represented. If you cherry-pick well, you can find a 13-year period with zero slope, shown by the brown contour. And you'll see that the hiatus periods form two descending columns, headed by a blue blob. These are the periods which end in a fixed year (approx) on the x-axis - ie a dip. There are just two of them, and they are the La Nina years of 2008/9 and 2011/2. The location of those events determines the hiatus. If you look at other sets on the trend viewer, you'll see this much more weakly. At WUWT I listed the 2001-13 trends thus (error range converted to ±1σ):

Dataset     Trend (°C/century)
HADCRUT     0.063 ± 0.301
GISS        0.506 ± 0.367
NOAA        0.509 ± 0.326
BEST L/O    0.468 ± 0.432
C&Way       0.489 ± 0.391


All except HADCRUT are quite positive. People sometimes speak of a slowdown. Incidentally, in the triangle plot, there is a reddish horizontal bar, bottom left, that is almost as prominent as the "pause". They are the strong positive trends that you can draw starting in 1999 - ie the 2001-6 warmth seen from the other end. I don't remember anyone getting excited about this feature.

I'd like to talk about the arithmetic of trends. Trend is a first central moment. It has a lot in common with moments of force, or torque. I think of it as a see-saw - a classic torque device. A heavyweight on the end has a lot of effect; in the middle not much. And of course, it depends which end. Trend is an odd see-saw, because it has both weights (cold periods) and uplifts (warm). It also has a progression. Items come on one end, and then progress across, exerting less and then opposite torque, until they drop off the other end (if you keep period fixed). So there isn't actually a lot of the period that is determining the trend. It is predominantly the end forces.
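The see-saw can be made explicit: the OLS trend is just a weighted sum of the data, with weights proportional to time measured from the middle of the period. A small R sketch (the monthly 2001-2013 dates are assumed for illustration, not tied to any particular dataset):

trend_weights <- function(t) {
  w <- t - mean(t)
  w / sum(w^2)                       # slope = sum(w * y) for data y at times t
}
t <- 2001 + (0:155) / 12             # monthly dates, 2001 to the end of 2013
w <- trend_weights(t)
plot(t, w, type = "h", xlab = "year", ylab = "trend weight")
# the weights are largest in magnitude at the two ends and near zero in the
# middle, so excursions near the ends dominate; for any anomaly series y,
# the trend is just sum(w * y)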

I'll illustrate that with this set of graphs (click the buttons below to see various datasets). It shows the mean (green) for 2001-2013 and colors the data (12-month running mean) as deviation from that value. The idea is that there has to be as much or more pulling the trend down than up, if it is to be negative. Either blue at the right or red at the left.



Now you can see that there aren't a lot of events that determine that. There is a red block from about 2001-6, which pulls the trend down. Then there are the two blue regions, the La Nina of 2008/9 and 2011/12, which also pull it down. The La Nina of 2008 has small torque on this period, but would have been effective earlier. 2012 has the leverage, and so overcomes the sole uplift period of 2010.

That is just four periods, and it isn't hard to see how their effects can be chancy. It's really the 2001/6 warmth that is the anchor.

And then you see the big red period at the end, which overwhelms all this earlier stuff. GWPF and Co are keen to say that this is just a special case that should be excluded. Something like: that it wasn't caused by CO2. But the 2001-6 period is also just a natural excursion, and wasn't caused by CO2 either.

Basically the pause from 2001 won't come back until that big red is countered by a big blue. That would ensure that the trend returns close to that green line (extended). Of course, the red will be a powerful pauser for trends starting in 2015, and we'll hear about that soon enough.

Here is the same data colored by deviation from the trend from 2001 to present. We're still well on the red side of that too. The point here is that as long as new data lands above that line, it will be more red, and the trend will go up. It won't even reverse direction until you start seeing blue at that end. And if it did, there is a long way to go.



Now that the line has shifted, you can see how the blue periods would have destroyed such a trend earlier. But now, with their reduced leverage and the size of the red, that is where the trend ends up. For Hadcrut it's now 1.4°C/century (other surface indices are higher).

So my conclusion is that, just as contrarians protest (with some justice) that not too much should be made of the current strong warming trends, because they are influenced by a single event, so too should the much weaker hiatus be viewed with modest interest. It is the result of the concurrence of two weaker events, La Ninas, which get less notice because they are less prominent, but are equally chance occurrences.







Thursday, May 4, 2017

Land masks, mesh and global temperature

I have been writing articles about land masks, leading up to using them to check and maybe improve my triangular mesh based TempLS. As I have tried to emphasise, the core of estimating global average temperature anomaly (or average anything) is numerical spatial integration. The temperature is known at a finite number of points. It has to be inferred for all the rest (interpolation) and the resulting complete field integrated. To do this successfully, the data has to be fairly homogeneous, so anomalies are formed to take out variance in long term mean values. Then in the triangle method, linear interpolation is done within triangles.

But another kind of inhomogeneity is between land and sea, and indices often use a land mask to try to pin that down. In the mesh context, and in general, the idea is to ensure that values on land are only interpolated from land data; sea likewise.

The method corresponding to what is done with grids would be to count the mask elements within each triangle, and to divide coast-crossing triangles into a land and a sea part. Since all that matters in the end is the weighting of each node, it's only necessary to get the area right. Assigning maybe a million grid elements to triangles is a rather heavy computation. So I tried something more flexible.

Here is a snapshot from the WebGL graphic below. It shows a problem section in East Africa. Light blue triangles are those that have two sea nodes, one land, and orange are those with two land, one sea. The Horn of Africa is counted as sea, and there is a good deal of encroachment of sea on land. That is about as bad as it gets, and of course there is some cancelling where land encroaches on sea.


So I refine the mesh. On the longest 20% of lines in such triangles, with land at one end and sea at the other, I make an extra node, and test whether it is sea/land with the mask. Then I give it the value of its matching end type. With the new nodes, I then re-mesh. This process I repeat several times. After respectively four and seven steps I get:

As you see, the situation improves greatly. New nodes cluster around the coast. There are, however, still two rather large triangles at sea with a land node. These can show up when everything else seems converged; it is because of the convex hull re-meshing which may make different decisions about some of the large triangles bordering the coast. It slows convergence.

As to placement of that new node on the line, that is where the mask with a metric comes in. I know the approx distance of each node from the coast, and can place the new node where I estimate the coast to cross. I don't want it to be too exact, just to minimise the interior nodes created.
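A minimal R sketch of that placement step, assuming signed distance-to-coast values are available at the two ends of a mixed edge (positive on land, negative at sea); the helper name split_edge is just for illustration:

# place a new node on a land/sea edge where the coast is estimated to cross,
# by linear interpolation of the signed distance-to-coast at the two ends
split_edge <- function(p1, p2, d1, d2) {
  f <- d1 / (d1 - d2)            # fraction along the edge where distance ~ 0
  p1 + f * (p2 - p1)
}
# a land node 3 cells inland and a sea node 1 cell out, on one line of latitude:
split_edge(c(30.0, 45.0), c(32.0, 45.0), 3, -1)    # roughly (31.5, 45.0)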


What I really want to know is what this does to the integral. So I tried first integrating the mask itself. That is a severe test; the result should show land/sea proportions as in a count of the mask. Then I tried integrating anomalies for February 2017. I'll show those below, but first, here is the WebGL display of the seven stages of refinement (radio buttons).

Integration results

The table below shows the results of the progression, by refinement step. The first data column is the area of the mixed triangles (part land, part sea), as a proportion of the total surface. The next shows the result of integrating the mask itself, which should converge to 0.314. The third gives the successive integrals of the anomalies for February 2017.

Step   Mixed area (fraction of sphere)   Integral of mask   Integral of anomaly (°C, Feb 2017)
0      0.1766                            0.3118             0.8728
1      0.1268                            0.3228             0.8583
2      0.1097                            0.3192             0.8645
3      0.0845                            0.3205             0.8655
4      0.0682                            0.3212             0.8646
5      0.0578                            0.3203             0.8663
6      0.0489                            0.3208             0.8624
7      0.0429                            0.3199             0.8611

Conclusion

I think it was a coincidence that the mask integration turned out near its target value of 0.314 at step 0 (no mesh change). As I said above, this is the most demanding case, maximising inhomogeneity. It doesn't improve, partly because of the occasional flipping of triangles, which leads to the occasional exceptions that show in the WebGL, and partly because it started so close. For anomalies, the difference it makes to February 2017 is small at around 0.01°C.

So, while I am glad to have checked on the coast issue, I don't think it is worth incorporating this method in TempLS. It means extra convex hull calculation for each month, which is slow.







Wednesday, May 3, 2017

ERSST and Sea Ice

I use the NOAA ERSST V4 SST (Sea surface temperature) dataset as part of TempLS. It has the virtue of coming out promptly at the start of the month, and of course is the product of a lot of scientific work. But it has two nuisance aspects. One, that I described last month, is that its 2x2° cells don't align very well with the coastal boundaries, and some repair action is needed. The other is the treatment of sea ice. ERSST returns values (if it can) for all non-land regions, and where there is sea ice, returns -1.8°C, which is the melting point of ice in sea water, and so is indeed presumably the temperature of the water. But it isn't much use as a climate proxy there. Polar air over ice is often very much colder.

My aim is to mark these regions as no result, so that they will be interpolated, mostly from land. But that is complicated because, while -1.8 is clear enough, there are often temperatures close to that, which presumably mean mostly ice, or maybe ice for part of the month. So I have used a cut-off of -1°C.
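In code terms the rule is very simple; a hedged R sketch, with a small made-up matrix standing in for the real ERSST grid:

ice_cutoff <- -1.0
sst <- matrix(c(12.3, -1.8, -1.4, 4.7), nrow = 2)   # hypothetical cell values
sst[sst < ice_cutoff] <- NA     # -1.8 (ice) and near-ice values become "no result"
sst                             # NA cells will be interpolated, mostly from land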

I have been working recently with land masks to improve the accuracy of TempLS near coasts. My preferred version uses a triangular mesh with nodes at measurement points, so triangles will often be part land, part sea. It would be desirable to ensure that the implied interpolation uses land values for land locations. I'll post soon on how this can be done. But it sharpens the problem of sea ice, because the land mask doesn't recognise it. So I need to use some data, and ERSST is to hand, to mark this as land rather than sea.

So I have been reviewing the criterion for making that determination. I actually still think that -1°C is reasonable. To see that, I mapped the ERSST grid for Jan-Mar 2017 to show where the in-between regions are. I used WebGL.

It might seem that WebGL is overkill, since the polar regions can be easily projected onto 2D. But the WebGL facility makes it the easiest way. I just set all positive temperatures to zero, use the GRID type so I don't have to work out triangles, and then the color mapping automatically devotes the color range to the region of interest (and makes a color key).

So here is the plot (drag to see poles); in those months (radio buttons) it is Arctic that is of most interest. You can see that most of the region expected to be sea ice is in fact at -1.8C, and the fringe regions are intermediate. But there are also regions around the Canadian islands, for example, which show up as higher than -1.8, but would be expected to be frozen. A level of -1 seems to capture all that, without unduly modifying the front to clear ocean.

April NCEP/NCAR down 0.226°C

Temperatures rose from January to March but dropped right back in April, from March's 0.566°C to 0.34°C. That makes it the coldest month since the 2016 El Nino, below the previous low of December's 0.391°C. But even so, it was warmer than the annual averages of both 2014 and 2015, each a record in its time.

The main cool places were Canada, N Europe and Antarctica. China, E Siberia and the Arctic Ocean were warm, as was even most of the US.

Update - slightly OT, but you may notice that the TempLS report for March has a strange number (0.653°C) for that month. The main table above it has the correct number. The reason seems to be that in GHCN in the last few days, a whole lot of March data has gone missing, as you can see in the station map of the report. I hope they fix it soon. Fortunately, the lack of data prevents it updating the main table.

Update 2 - I wrote to GHCN but no response so far. Meanwhile, the pattern has changed - no longer whole countries missing, but more stations overall, so that now TempLS won't report at all.

Update 3. I got a response from GHCN saying that it was an ingest problem, now fixed. And it does seem OK now.



Friday, April 28, 2017

Land masks with distance measure

I wrote earlier about my use of land masks to sharpen up the boundaries of the ERSST data set that I use (and stop SST grid centres turning up on land). I have a more ambitious use in improving the weighting of the TempLS triangular mesh for land/sea difference. At present, many elements have mixed land/sea, and it is largely left to chance to get the balance right. I think that usually works out, but it would be better to have control.

A land mask is a big matrix of 1's and 0's corresponding to a grid, usually lat/lon. It has 1 if the cell is on land; it may also have a % where there is doubt, or may have a binary choice. There are a lot of land masks around, down to kilometer resolution if you want, but common ones are 1°, 1/2° and 1/4°. That is what I will use (as used in the ISLSCP 2 project).

My general scheme is to refine the mesh to reduce the area of those spanning triangles. New nodes don't have new data attached, but their weight will be attributed to a land or sea station according to their placement.

I found that I would really like a more advanced mask, that actually gave a measure of the distance to the coast (for land and sea). It doesn't really increase the size of the mask. And it means that when I want to create a new node, I can place it toward the coast, instead of waiting for successive node generation to locate it. My scheme without this worked well for a while, but would create situations where new nodes would force a shift in some triangle that had all nodes on land. This happens because each mesh update is by convex hull formation, and with new nodes such a triangle might lose its tangent status.

So I set about making such a mask. I use a diffusion scheme. I mark the cells where land and sea adjoin, scored zero. Then next step I mark every neighbor cell on the land side +1, and on sea, -1. Then I mark their neighbors +2, -2, and so on.

But there is the problem of lakes. Masks generally show a lot of them, and I don't really want to know the distance to the nearest lake. So I first remove them. I do this by diffusion too. At this stage, I have the original 0,1 mask. I first advance the land by marking each neighbour of the 1 cells with a 1, and then again. That fills in most lakes, but also a lot of sea, especially bays etc. So then I diffuse back, advancing the 0's. This won't reopen the inland lakes, but will restore the sea cells to 0. Then I use the original mask to restore all land to 1 status.
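Here is a hedged R sketch of the diffusion idea on a toy 0/1 mask (1 = land). It is a simplified variant of the scheme above (coastal cells end up at ±1 here rather than 0, and the lake-filling pass is omitted), just to show the mechanics:

mask <- matrix(0, 7, 7); mask[3:5, 3:5] <- 1       # a small square "island"
neighbour_max <- function(m) {                     # dilate by one cell (4-neighbours)
  up    <- rbind(m[-1, ], 0);  down  <- rbind(0, m[-nrow(m), ])
  left  <- cbind(m[, -1], 0);  right <- cbind(0, m[, -ncol(m)])
  pmax(m, up, down, left, right)
}
d    <- matrix(0, 7, 7)                            # signed distance from the coast
land <- mask; sea <- 1 - mask
for (k in 1:3) {
  land2 <- neighbour_max(land); sea2 <- neighbour_max(sea)
  d[land2 == 1 & land == 0] <- -k                  # sea cells k steps from land
  d[sea2  == 1 & sea  == 0] <-  k                  # land cells k steps from sea
  land <- land2; sea <- sea2
}
d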

I'll show below how this all works. It has enabled the overall aim, a coast-hugging triangular mesh, which I'll show in my next post. I have put the results as an R data file here. It is a list "mask"; the component names combine a letter (q for the original, a for the lake-less version, n for the version with distance to sea) with 1, 2 or 4 for cells per degree.

Wednesday, April 26, 2017

GWPF International Temperature Data Review - second anniversary

I've been intermittently tracking the progress of this review, which seems to have zombie status. The web site is still there, with no sign of news or termination. The project itself was announced here, with banner headlines in the Telegraph ( "Top Scientists Start To Examine Fiddled Global Warming Figures" )  and echoes. I described the state of play in September 2015.

I posted on the previous anniversary. I thought it necessary to maintain a watch, because they had said that despite not proceeding to a report, papers would be written, including one on the submissions. Publication of those would be held back until then. But Sept 2015 was the last news posting, and I have not heard of any progress with papers.

This is probably my last post on the topic - I think we have to deem it totally dead, despite the GWPF website still promising progress.



Sunday, April 23, 2017

Land Masks and ERSST

I use ERSST V4 as the ocean temperature data for TempLS. The actual form of the data is sometimes inconvenient; it probably wasn't intended for my kind of use. I described how it fits in here. My main complaint there was that it sets SST under sea ice to -1.8°C, which is obviously not useful as an air proxy. They obviously can't produce a good proxy, but it would be better to have the area explicitly masked, as you can't tell when the temperature is below about -1°C whether it is really so, or whether there was part of the month that was frozen over, pulling down the average.

I described last month a new process I use to get a more evenly distributed subset of the ERSST for processing. The native density of 2x2° is unbalanced relative to land, and biases the temperature toward marine. The new scheme works well, but it draws attention to another issue. ERSST seems to quote a temperature for any cell for which they have a reading, even if the cell is mostly land. And in the new scheme, cell centers can more easily be on land. In particular, one turned up in the English Midlands, just near where I was once told is the point at maximum distance from the sea.

I've been thinking more about land masking lately. I have from a long while ago a set of masks that were used in the ISLSCP 2 project. They come in 1, 1/2 and 1/4° resolution, and in one version have percentages marked. I used the percent version to get land % for the 2° grid, and compared with what ERSST reported. Here is a WebGL version of that:



The ERSST filled cells are marked in pink; the land mask in lilac. The cells in green are both in ERSST and the land mask; white cells are in neither. You can switch the checkboxes top right to look at just ERSST, just mask, or just the green if you want. I called the green OVER, because it seems to mainly show sea intruding on land.

There is a tendency for the green to appear on west coasts, which suggests that the ERSST might be misaligned. One annoying thing about ERSST is that they aren't explicit about whether the coordinates given for a cell represent the center or a corner. I've assumed center. If you moved ERSST one degree west, the green would then appear, a little more profusely, on the East coasts. I used 60% sea as the cut-off for the land mask. This was a result of trial; 50% meant that the land mask tended to fall short of the coast more than overshoot; 60% seemed to be the balance point. Either is pretty good.

So my remedy has been to remove the green cells from the ERSST data. That seems to fix the problem. It raises anomalies very slightly, because it upweights land, but March rose only from 0.890°C to 0.894°C, with similar rises in earlier months. The area involved is small.
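A hedged R sketch of that remedy, with made-up arrays standing in for the real 1/4° percent-land mask and the 2° ERSST grid (the block size and 60%-sea cut-off are as described above; everything else is illustrative):

pct_land <- matrix(runif(720 * 1440, 0, 100), 720, 1440)  # hypothetical 1/4-deg % land
ersst    <- matrix(rnorm(90 * 180), 90, 180)              # hypothetical 2-deg SST cells

coarsen <- function(m, f) {                  # block-average by a factor f
  nr <- nrow(m) %/% f; nc <- ncol(m) %/% f
  out <- matrix(0, nr, nc)
  for (i in 1:nr) for (j in 1:nc)
    out[i, j] <- mean(m[(i - 1) * f + 1:f, (j - 1) * f + 1:f])
  out
}
land2deg <- coarsen(pct_land, 8)             # 8 quarter-degree cells per 2 degrees
ersst[land2deg > 40] <- NA                   # drop cells that are less than 60% sea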

I am now looking at ways to landmask the triangular mesh.



Friday, April 21, 2017

Spherical Harmonics - the movie

This is in a way a follow-up to the Easter Egg post. There I was showing the icosahedral based mesh with various flashing colors, with a background of transitions between spherical harmonics (SH) to make an evolution. Taking away the visual effects and improving the resolution makes it, IMO, a good way of showing the whole family of spherical harmonics. I described those and how to calculate them here, with a visualisation as radial surfaces here.

Just reviewing - the SH are the analogue of trig functions in 1D Fourier analysis. They are orthogonal with respect to integration on the surface, and as with 1D Fourier, you can project any function onto a subspace spanned by a finite set of them - that is, a least squares fit. The fit has various uses. I use one regularly in my presentation of TempLS results, and each month I show how it compares with the later GISS plot (it compares well). I also use it as an integration method; all but the first SH's exactly integrate to zero, so with a projection onto SH space, the first coefficient gives the integral. I think it is nearly as good as the triangle mesh integration.

As with trig functions, the orthogonality occurs because they have oscillations that can't be brought into phase, but cancel. That is the main point of the pattern that I will show. There are two integer parameters, L and M, with 0≤M≤L. Broadly, L represents the total number of oscillations, some in latitude and some around the longitude, and M represents how they are divided. With M=0, the SH is a function of latitude only, and with M=L, of longitude only (in fact, a trig function sin(M*φ)). Otherwise there is an array of peaks and dips.

Sunday, April 16, 2017

A Magical Easter Egg

This is a Very Serious Post. Really. It's a follow-up to my previous post about icosahedral tessellation of the sphere (Earth). The idea is to divide the Earth as best possible into equal equilateral triangles. It's an extension of the cubed sphere that I use for gridding in TempLS. The next step is to subdivide the 20 equilateral triangles from the icosahedron into smaller triangles and project that onto the sphere. This creates some distortion near the vertices, but less than for the cube.

So I did it. But not having an immediate scientific use for it, and having some time at Easter, I started playing with some WebGL tricks. So here is the mesh (each triangle divided into 49) with some color features, including some spherical harmonics counterpoint.

Naturally, you can move it around, and there are some controls. Step is the amount of color change per step, speed is frame speed, and drift is the speed of evolution of the pattern. It's using a hacked version of the WebGL facility. Here it is. Happy Easter.

Saturday, April 15, 2017

GISS March up by 0.02°C, now 1.12°C!

As Olof noted, GISS has posted on March temperature. It was 1.12°C, up by 0.02°C from February. That rise is close to the 0.03°C shown by TempLS mesh. It makes March also a very warm month indeed. It's the second warmest March in the record - Mar 2016 was near the peak of the El Nino. And it exceeds any month before 2016.

Here is the cumulative average plot for recent warm years. Although 2016 was much warmer at the start, the average for 2017 so far is 0.06°C higher than for all 2016.





I'll show the globe plot below the jump. It shows the huge warmth in Siberia, and most of N America except NW. And also Australia - yes, it has been a very warm autumn here so far (mostly). GISS escaped the China glitch.


Thursday, April 13, 2017

TempLS update - now March was warmer than Feb by 0.03°C

Commenter Olof R noticed that the TempLS mesh estimate for March had suddenly risen, reversing the previously reported drop of about 0.06°C to a rise of 0.03°C. He attributed the rise to a change in China data, which, as noted in the previous post had been very cold, and was now neutral.

I suspected that the original data supplied by China might have been for February, a relatively common occurrence. Unfortunately when I download GHCN data it overwrites the previous, so I can't check directly. But the GHCN MAX and MIN data are updated at source less frequently than TAVG, and they are currently as of 8 April. So I checked the China data there, and yes, March was very similar to February, though not identical. GHCN does a check for exact repetition.

Then I checked the CLIMAT forms at OGIMET. I checked the first location, HAILAR (way up in Manchuria). The current CLIMAT has a TMAX of -3°C for March and -13.5°C for Feb, and yes, the 8 Apr GHCN has -13.5. So it seems that is what happened, and has been corrected.

So March is warmer than February, and so warmer than any month before Oct 2015. It is also warmer than the record annual average of 2016, and so is the average for Q1 of 2017. The result is fairly consistent with the NCEP/NCAR average, which showed a very slight fall. I was preparing a progress plot for the next GISS report, so I'll show that for TempLS. It shows the cumulative average for each year, and the annual average as a straight line. 2017 has not started with the El Nino rush of 2016, but is ahead of the average and seems more likely to increase than decrease.





Icosahedral Earth

This post is basically an exercise in using the WebGL facility, with colorful results. It's also the start of some new methods, hopefully. I wrote a while ago about improved gridding methods for integrating surface temperatures. The improvement was basically a scheme for estimating missing cells based on neighbors, and an important enabling feature was a grid that had more uniform cells than the conventional lat/lon grid. I used a cubed sphere - a projection of a gridded cube surface onto the sphere. The corners of the cube are a slight irregularity, that can be mitigated by non-linear scaling of the grid spacing. The cubed sphere has become popular lately - GFDL use it for their GCMs. It worked well for me.

In that earlier post, Victor Venema suggested using an icosahedron instead. This has less irregularity at the vertices, since the solid angle is greater, and the distortion of mapping to a sphere less. The geometry is a bit less familiar than the cube, but quite manageable.

A few days ago, I described methods now built into the facility for mapping triangles that occur in convex hull meshing actually onto the spherical surface. This is basically what is needed to make a finer icosahedral mesh. In this post, I'll use that as provided, but won't do the subdivision - that is for another post.

I also wanted to try another capability. The basic requirement of the facility is that you supply a set of nodes, nodal values (for shading), and links which are a set of pointers to the nodes and declare triangles, line segments etc. From that comes continuous shading, which is usually what is wanted. But WebGL does triangles individually, and you can color them independently. You just regard the nodes of each triangle as being coincident with others, but having independent values. For the WebGL facility, that means that for each triangle you give a separate copy of the nodal coordinates and a separate corresponding value, and the links point to the appropriate version of the node.
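A small R sketch of that duplication, building the per-triangle copies of nodes and values from a shared node list (toy numbers, just to show the indexing; not the facility's actual code):

nodes <- matrix(c(0,0, 1,0, 0,1, 1,1), ncol = 2, byrow = TRUE)  # 4 shared points
tris  <- rbind(c(1, 2, 3), c(2, 4, 3))                          # 2 triangles
flat_nodes <- nodes[t(tris), ]               # 6 rows: each triangle gets its own copy
flat_vals  <- rep(c(0.3, 0.8), each = 3)     # one value per triangle, repeated per node
flat_links <- matrix(1:6, ncol = 3, byrow = TRUE)   # links now point at the copies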

So I thought I should try that in practice, and yes, it works. The colors look better if you switch off the map - checkbox top right. So here is the icosahedral globe, with rather random colors for shading:

Friday, April 7, 2017

March global surface temperature down 0.066°C.

Update There was a major revision to GHCN China data, and now March was 0.03°C warmer than February. See update post

TempLS mesh declined in March, from 0.861°C to 0.795°C. This follows the very small drop of 0.01°C in the NCEP/NCAR index, and larger falls in the satellite indices. The March temperature was still warm, however. It was higher than January (just) and higher than any month before October 2015. And the mean for the first quarter at 0.813°C is just above the record high annual mean of 0.809°C, though it could easily drop below (or rise further) with late data. So far all the major countries seem to have reported. With that high Q1 mean, a record high in 2017 is certainly possible.

TempLS grid fell by a little more, 0.11°C. The big feature this month was the huge warmth over Siberia. It was cold in Canada/Alaska (but warm in ConUS) and cold in China. Here is the map:



The breakdown plot is remarkable enough that I'll show that too here (it's always on the regular report). On land almost all the positive contribution came from Siberia and Arctic - without that, it would have been quite a steep fall. SST has been slowly rising since December, which is another suggestion of a record year possibility.





Incidentally I'm now using the finer and more regular SST mesh I described here. The effect on results is generally small, of order 0.01-0.02°C either way, which is similar to the amount of drift seen in late data coming in. You may notice small differences in comparing old and new. You'll notice quite a big change in the number of stations reporting, which is due to the greater number of SST. I've set a new minimum for display at 5300 stations.



Wednesday, April 5, 2017

Global 60 Stations and coverage uncertainty

In the early days of this blog, I took up a challenge of the time, and calculated a global average temperature using just 60 land stations. The stations could be selected for long records, rural etc. It has been a post that people frequently come back to. I am a little embarrassed now, because I used the plain grid version of the TempLS of the day, and so it really didn't do area weighting properly at all. Still, it gave a pretty good result.

Technology and TempLS have advanced. I next tried using a triangular mesh with proper Voronoi cells (I wouldn't bother now). I couldn't display it very well, but the results were arguably better.

Then, about 3 years ago, I was finally able to display the results with WebGL. That was mainly a graphics post. Now I'd like to show some more WebGL graphics, but I think the more interesting part may be tracking the coverage uncertainty, which of course grows as stations are removed. I have described here and earlier some ways of estimating coverage uncertainty, different from the usual ways involving reanalysis. This is another way which I think is quite informative.

I start with a standard meshed result for a particular month (Jan 2014), which had 4758 nodes, about half SST. I get the area weights as used in TempLS mesh. This assigns weight to each node according to the area of the triangles it is part of. Then I start culling, removing the lowest weights first. My culling aims to remove 10% of nodes with each step, getting down to 60 nodes after about 40 steps. But I introduce a random element by setting a weight cut at about 12.5%, and then selecting 4/5 of those at random. After culling, I re-mesh, so the weights of many nodes change. The rather small randomness in node selection has a big effect on randomising the mesh process.
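A hedged R sketch of one culling step as described (the weights here are just random stand-ins for the TempLS mesh area weights, and the re-meshing itself is not shown):

cull_step <- function(wts) {
  cut  <- quantile(wts, 0.125)                  # weight cut at about 12.5%
  low  <- which(wts < cut)
  drop <- sample(low, size = round(0.8 * length(low)))   # a random 4/5 of those
  setdiff(seq_along(wts), drop)                 # indices of the surviving nodes
}
wts  <- runif(4758)                             # stand-in for the Jan 2014 mesh weights
keep <- cull_step(wts)
length(keep)                                    # about 90% of nodes survive; then re-mesh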

And so I proceed, calculating the new average temperature at each step from the existing anomalies. I don't do a re-fitting of temperature; this is just an integration of an existing field. I do this 100 times, so I can get an idea of the variability of temperature as culling proceeds.

Then, as a variant, I select for culling with a combination of area and a penalty for SST. The idea is to gradually remove all ocean values, and end up with just 60 land stations to represent the Earth.

Monday, April 3, 2017

NCEP/NCAR global surface temperature down 0.01°C in March

The NCEP/NCAR anomaly for March was 0.566°C, almost the same as Feb 0.576°C. And that is very warm. It makes the average for the first quarter 0.543°C, compared with the 2016 annual average of 0.531°C. In most indices, 2016 was the warmest ever, so with a prospect of El Nino activity later in the year, 2017 could well be the fourth record year in a row.

You can bring up the map for the month here. It was warm in Europe, mixed in N America, warm in Siberia but cool further South, and varied at the poles. So GISS may come down a bit, since it has been buoyed by the Arctic warmth.





Friday, March 31, 2017

Moyhu WebGL interactive graphics facility, documented.

I wrote a post earlier this month updating a general facility for using WebGL for making interactive Earth plots, Google-Earth style. I have now created a web page here which I hope to maintain which documents it. The page is listed near the bottom of the list at top right. I expect to be using the facility a lot in future posts. It has new features since the last post, but since I don't think anyone else has used that yet, I'll still call the new version V2. It should be compatible with the earlier.

Tuesday, March 28, 2017

More ructions in Trump's EPA squad.

As a follow-up to my previous post on the storming out of David Schnare, there is a new article in Politico suggesting that more red guards are unhappy with their appointed one. It seems the "endangerment finding" is less endangered than we thought.
But Pruitt, with the backing of several White House aides, argued in closed-door meetings that the legal hurdles to overturning the finding were massive, and the administration would be setting itself up for a lengthy court battle.

A cadre of conservative climate skeptics are fuming about the decision — expressing their concern to Trump administration officials and arguing Pruitt is setting himself up to run for governor or the Senate. They hope the White House, perhaps senior adviser Stephen Bannon, will intervene and encourage the president to overturn the endangerment finding.

Monday, March 27, 2017

Interesting EPA snippet.

From Politico:
Revitalizing the beleaguered coal industry and loosening restrictions on emissions was a cornerstone of Trump’s pitch to blue collar voters. Yet, two months into his presidency, Trump loyalists are accusing EPA Administrator Scott Pruitt of moving too slowly to push the president’s priorities.

Earlier this month, David Schnare, a Trump appointee who worked on the transition team, abruptly quit. According to two people familiar with the matter, among Schnare’s complaints was that Pruitt had yet to overturn the EPA’s endangerment finding, which empowers the agency to regulate greenhouse gas emissions as a public health threat.

Schnare’s departure was described as stormy, and those who’ve spoken with him say his anger at Pruitt runs deep.

"The backstory to my resignation is extremely complex,” he told E&E News, an energy industry trade publication. “I will be writing about it myself. It is a story not about me, but about a much more interesting set of events involving misuse of federal funds, failure to honor oaths of office, and a lack of loyalty to the president."

Other Trump loyalists at EPA complain they’ve been shut out of meetings with higher-ups and are convinced that Pruitt is pursuing his own agenda instead of the president’s. Some suspect that he is trying to position himself for an eventual Senate campaign. (EPA spokespersons did not respond to requests for comment.)
David Schnare, a former EPA lawyer, has been most notable for his unsuccessful lawsuits (often with Christopher Horner) seeking emails of Michael Mann and others. Here he is celebrating at WUWT his appointment to the Trump transition team.

Update Here is the story at Schnare's home base at E&E.

Update - as William points out below, I had my E&Es mixed up. Here is Schnare at his E&E announcing his appointment. But they have not announced his departure.


Wednesday, March 22, 2017

Global average, integration and webgl.

Another post empowered by the new WebGL system. I've made some additions to it which I'll describe below.

I have written a lot about averaging global temperatures. Sometimes I write as a sampling problem, and sometimes from the point of view of integration.

A brief recap - averaging global temperature at a point in time requires estimating temperatures everywhere based on a sample (what has been measured). You have to estimate everywhere, even if data is sparse. If you try to omit that region, you'll either end up with a worse estimate, or you'll have to specify the subset of the world to which your average applies.

The actual averaging is done by numerical integration, which generally divides the world into sub-regions and estimates those based on local information. The global result always amounts to a weighted average of the station readings for that period (month). It isn't always expressed so, but I find it useful to formulate it so, both conceptually and practically. The weights should represent area.

In TempLS I have used four different methods. In this post I'll display with WebGL, for one month, the weights that each uses. The idea is to see how well each does represent area, and how well they agree with each other. I have added some capabilities to the WebGL system, which I will describe.

I should emphasise that the averaging process is statistical. Errors tend to cancel out, both within the spatial average and when combining averages over time, when calculating trends or just drawing meaningful graphs. So there is no need to focus on local errors as such; the important thing is whether a bias might accumulate. Accurate integration is the best defence against bias.

The methods I have used are:
  • Grid cell averaging (eg 5x5 deg). This is where everyone starts. Each cell is estimated as an average of the datapoints within it, and weighted by cell area. The problem is cells that have no data. My TempLS grid method follows HADCRUT in simply leaving these out. The problem is that the remaining areas are effectively infilled with the average of the points measured, which is often inappropriate. I continue to use it because it has often very closely tracked NOAA and HADCRUT. But the problem with empty cells is serious, and is what Cowtan and Way sought to repair.
  • My preferred method now is based on irregular triangulation, and standard finite element integration. Each triangle is estimated by the average of its nodes. There are no empty areas.
  • I have also sought to repair the grid method by estimating the empty cells based on neighboring cells. This can get a bit complicated, but works well.
  • An effective and elegant method is based on spherical harmonics. The nodes are fitted with a set of harmonics, based on least squares regression. Then in integrating this approximation, all except the first go to zero. The integral is just the coefficient of the constant.


The methods are compared numerically in this post. Here I will just display the weights for comparison in WebGL.
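To make the first (grid) method concrete, here is a minimal R sketch of how the station weights might be formed; the 5° cell size and the cos-latitude area factor are obvious simplifications for illustration, not the exact TempLS code:

grid_weights <- function(lat, lon, cell = 5) {
  ilat <- floor((lat + 90) / cell)
  ilon <- floor((lon + 180) / cell) %% (360 / cell)
  id   <- paste(ilat, ilon)                     # which cell each station is in
  area <- cos((ilat * cell - 90 + cell / 2) * pi / 180)   # relative cell area
  n    <- ave(rep(1, length(lat)), id, FUN = sum)         # stations per cell
  area / n                                      # each station shares its cell's area
}
# three stations: two sharing an equatorial cell, one at 60N
grid_weights(lat = c(1, 2, 60), lon = c(10, 11, 100))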

Friday, March 17, 2017

Temperature residuals and coverage uncertainty.

A few days ago I posted an extensive ANOVA-type analysis of the successive reduction of variance as the spatial behaviour of global temperatures was more finely modelled. This is basically a follow-up to show how the temperature field can be partitioned into a smooth part with known reliable interpolation, and a hopefully small residue. Then the size of the residue puts a limit on the coverage uncertainty.

I wrote about coverage uncertainty in January. It's the uncertainty about what would happen if one could measure in different places, and is the main source of uncertainty in the monthly global indices. A different and useful way of seeing it is as the uncertainty that comes with interpolation. Sometimes you see sceptic articles decrying interpolation as "making up data". But it is the complement of sampling, which is how we measure. You can only measure anything at a finite number of places. You infer what happens elsewhere by interpolation; that can't be avoided. Just about everything we know about the physical world, or economic for that matter, is deduced from a finite number of samples.

The standard way of estimating coverage uncertainty was used by Brohan et al 2006. They took a global reanalysis and sampled at sets of places corresponding to possible station distributions. The variability of the resulting averages was the uncertainty estimate. The weakness is that the reanalysis may have different variability to the real world.

I think analysis of residuals gives another way. If you have a temperature anomaly field T, you can try to separate it into a smoothed part s and a residual e:
T = s + e
If s is constructed in such a way that you expect much less uncertainty of interpolation than T, then the uncertainty has been transferred to e. That residual is more intractable to integrate, but you have an upper bound based on its amplitude, and that is an upper bound to coverage uncertainty.

So below the jump, I'll show how I used a LOESS type smoothing for s. This replaces points by a low-order polynomial weighted regression, and the weighting is by a function decaying with distance, in my case exponentially, with characteristic distance r (ie exp(-|x|/r)). With r very high, one can be very sure of interpolation (of s), but the approximation will not be very good, so e will be large, and contains a lot of "signal" - ie what you want to include in the average, which will then be inaccurate. If the distance is very small, the residual will be small too, but there will be a lot of noise still in s. I seek a compromise where s is smooth enough, and e is small enough. I'll show the result of various r values for recent months, focussing on Jan 2017. I'll also show WebGL plots of the smooths and residuals.
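A one-dimensional R sketch of that kind of smoother (the real one works on the sphere; the exponential kernel and the local low-order fit are as described above, the synthetic data is made up):

loess_exp <- function(x, y, r) {                 # local linear fit with exp(-|x-x0|/r) weights
  sapply(x, function(x0) {
    w <- exp(-abs(x - x0) / r)
    coef(lm(y ~ I(x - x0), weights = w))[1]      # fitted value of s at x0
  })
}
x <- seq(0, 10, by = 0.1)
y <- sin(x) + rnorm(length(x), sd = 0.2)         # "temperature" with noise
s <- loess_exp(x, y, r = 0.5)                    # smooth part
e <- y - s                                       # residual; its size bounds the coverage error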

I should add that the purpose here is not to get a more accurate integral by this partition. Some of the desired integrand is bound to end up in e. The purpose is to get a handle on the error.

Thursday, March 16, 2017

GISS up by 0.18°C, now 1.1°C!

GISS has posted a report on February temperature, though it isn't in their posted file data yet. It was 1.10°C, up by 0.18°C. That rise is a bit more than the 0.13°C shown by TempLS mesh. It also makes February a very warm month indeed, as the GISS article says. It's the second warmest February in the record - Feb 2016 was at the peak of the El Nino. And it is equal to December 2015, which was also an El Nino month, and warmer than any prior month, of any kind.

I'll show the plot below the jump. It shows a lot of warmth in N America and Siberia, and cool in the Middle East.

As I noted in the previous post, TempLS had acquired a bug in the treatment of GHCN data that was entered and later removed (usually flagged). This sometimes caused late drift in the reported numbers. It has been fixed. Last month is up by 0.03°C on initial report.

Wednesday, March 15, 2017

Making an even SST mesh on the globe.

I have been meaning to tidy up the way TempLS deals with the regular lat/lon SST grid on the globe. I use ERSST, which has a 2x2° grid. This is finer than I need; it gives the sea much more coverage than the land gets, and besides being overkill, it distorts near coasts, making them more marine. So I had reduced it to a regular 4x4° grid, and left it at that.

But that has problems near the poles, as you can see in this image:



The grid packs in lots of nodes along the upper latitudes. This is ugly, inefficient, and may have distorting effects in making the polar region more marine than it should, although I'm not sure about that.

So I looked for a better way of culling nodes to get a much more even mesh. The ideal is to have triangles close to equilateral. I have been able to get it down to something like this:



I don't think there is much effect on the resulting average, mainly because SST is still better resolved than land. But it is safer, and looks more elegant.

And as an extra benefit, in comparing results I found a bug in TempLS that had been puzzling me. Some, but not all, months had been showing a lot of drift after the initial publication of results. I found this was due to my system for saving time by storing meshed weights for past months. The idea is that if the station mix changes, the weights will be recalculated. But for nodes which drop out (mostly through acquiring a quality flag) this wasn't happening. I have fixed that.

Below the jump, I'll describe the algorithm and show a WebGL mesh in the new system.

Sunday, March 12, 2017

Residuals of monthly global temperatures.

I have frequently written about the task of getting a global average surface temperature as one of spatial integration, as here or here. But there is more to be said about the statistical aspect. It's a continuation of what I wrote here about spatial sampling error. In this post, I'll follow a path rather like ANOVA, with a hierarchy of improving approximations leading to smaller and more random residuals. I'll also follow through on my foreshadowed more substantial application of the new WebGL system, to show how the residuals do change over the surface.

So the aims of this post are:
  1. To see how various levels of approximation reduce the variance
  2. To see graphically how predictability is removed from the residuals. The idea here is that if we can get to iid residuals in known locations, that distribution should be extendable to unknown locations, giving a clear basis for estimation of coverage uncertainty.
  3. To consider the implications for accurate estimation of global average. If each approximation is itself integrable, then the residuals make a smaller error. However, unfortunately, they also become themselves harder to integrate, since smoothness is deliberately lost.
A table of contents will be useful:

Friday, March 10, 2017

January HADCRUT and David Rose.

Yet another episode in the lamentable veracity of David Rose and the Daily Mail. Sou covered a kerfuffle last month when Rose proclaimed in the Sunday Mail:

"The ‘pause’ is clearly visible in the Met Office’s ‘HadCRUT 4’ climate dataset, calculated independently of NOAA.
Since record highs caused last year by an ‘el Nino’ sea-warming event in the Pacific, HadCRUT 4 has fallen by more than half a degree Celsius, and its value for the world average temperature in January 2017 was about the same as January 1998."


This caused John Kennedy, of the Met Office, to note drily:



Rose was writing 19 Feb, and Hadcrut does indeed take much longer to come out. But it is there now, and was 0.741°C for the month. That was up quite a lot from December, in line with GISS (and Moyhu TempLS). It was a lot warmer than January 1998, at 0.495°C. And down just 0.33°C from the peak in Feb 2016.

And of course it was only last December that David Rose was telling us importantly that "New official data issued by the Met Office confirms that world average temperatures have plummeted since the middle of the year at a faster and steeper rate than at any time in the recent past".

In fact, January was warmer than any month since April 2016, except for August at 0.77°C.

Update. David Rose was echoed by GWPF, who helpfully provided this graph, sourced to Met Office, no less:

I've added a circle with red line to show where January 2017 actually came in. I don't know where their final red dot could have come from. Even November, the coldest month of 2016, was 0.524°C, still warmer than Jan 1998.

Wednesday, March 8, 2017

Moyhu WebGL interactive graphics facility, V2.

As mentioned in the previous post, I've been working on a new version of a WebGL graphics facility that I first posted three years ago. Then it was described as providing simplified access to WebGL plotting of data on a sphere, using the active and trackball facilities. It could work from a fairly simple user-supplied data file. I followed up with an even simpler grid-based version, which included a text box where you could just paste in the lat/lon grid data values and it would show them on an active sphere.

So now there is an upgrade, which I'll call V2. Again, it consists of just three files: an HTML stub MoyGLV2.html, a functional JavaScript file called MoyGLV2.js, and a user file, with a name of the user's choice. The names and locations of the JS files are declared in the html. Aside from that, users just amend the user file, which consists of a set of data statements in Javascript. JS syntax is very like C, but the syntax needed here is pretty universal. The user file must be declared before MoyGLV2.js (or its equivalent) in the HTML.

The main new features are:
  • The merging of the old grid input via a new GRID type, which only requires entry of the actual data.
  • An extension of the user input system that came with the grid facility. A variety of items can now be put in via text box (which has a 16000 char limit).
  • A multi-data capability. Each property entered can now be an array. Radio buttons appear so that the different instances can be selected. This is very useful for making comparisons.
  • A flat picture capability. The motivation was to show spheres in 3D, but the infrastructure is useful for a lat/lon projection as well.
  • A compact notation for palettes, with color ramps.

I'll set out the data requirements below the jump, along with some information on the controls (which haven't changed much). Finally I'll give a grid example, with its result, and below that the code for the palette demo from the last post. The zip-file which contains code and example data is here. There are only about 500 lines of JS, but I've included sample data.


February global surface temperature up 0.106°C.

TempLS mesh posted another substantial rise in February, from 0.737°C to 0.843°C. This follows the earlier very similar rise of 0.09°C in the NCEP/NCAR index, and smaller rises in the satellite indices. Exceeding January, February was record warm by any pre-Nino16 standard: it was warmer (in anomaly) than any month before October 2015.

TempLS grid also rose by 0.11°C. The breakdown plot showed the main contributions from Siberia and N America, with Arctic also warm. The map shows those features, and also cold in the Middle East.







Sunday, March 5, 2017

Playing with palettes in WebGL earth plotting.

Three years ago, I described simplified access to WebGL plotting of data on a sphere, using the active and trackball facilities. It could work from a fairly simple user-supplied data file. I don't know if anyone actually used it, but I certainly did. It is the basis for most of my WebGL work. I followed up with an even simpler grid-based version, which included a text box where you could just insert the lat/lon grid data values and it would show them on an active sphere.

I've been updating this mechanism, and I'll post a new version description in a few days, and also a more substantive application. But this post just displays a visual aspect that users may want to play with.

I almost always use rainbow palettes, and they are the default in the grid program. But they are deprecated in some quarters. I think they are the most efficient, but it is good to explore alternatives. One feature of the new system is that you can show and switch between multiple plots; another is that the textbox system for users to add data has been extended.

The plot below shows January 2016 anomalies, as I regularly plot here. On the top right, you'll see a set of radio buttons. Each will show the same plot in a different color scheme. The abbreviations expand in a title on the left when you click. They are just a few that I experimented with. The good news is, you can insert your own palettes. I'll explain below the plot.



As usual, the Earth is a trackball, and dragging vertically with the right button will zoom. Clicking brings up data for the nearest station. "Orient" rotates the current view to map orientation.

In the new scheme, you can alter data by choosing the relevant category in the dropdown menu at top right, pasting the data into the text box, and clicking "Apply". There is a shortened format for palettes. Colors are represented by an RGB triple with components between 0 and 1 (this is the GL convention): 0,0,0 is black, 1,0,0 is red. So you enter a comma-separated set of numbers in groups of four. The first three are the RGB, and the fourth is the number of colors that ramp to the next group; these ramp counts should add up to 256. The last group of four still needs a final integer to keep the format, but its value doesn't matter. The whole series should be in square brackets, indicating a Javascript array. Here is a table of the data I used:

Array of data                                                     Description
[1,0,0,64, 1,1,0,64, 0,1,0,64, 0,1,1,64, 0,0,1,64]                Rainbow spectrum
[1,0,0,96, 1,1,0,80, 0,1,0,48, 0,1,1,32, 0,0,1,64]                Rainbow tinged red
[1,0,0,32, 1,1,0,48, 0,1,0,80, 0,1,1,96, 0,0,1,64]                Rainbow tinged blue
[1,0,0,64, 1,1,0,64, 1,1,1,64, 0,1,1,64, 0,0,1,64]                Red, yellow, white and blue
[1,0,0,128, 1,1,1,128, 0,0,1,1]                                   Red, white and blue
[0.62,0.32,0.17,128, 0.7,0.7,0.3,128, 0.4,0.6,0.2,1]              Earth: brown to olive
[0.62,0.32,0.17,128, 0.7,0.7,0.3,104, 0.4,0.6,0.2,24, 0,0,1,1]    Earthy with blue
[1,1,1,256, 0,0,0,1]                                              White to black


You can enter a similar sequence in the text box and see what it looks like. It will replace the currently selected palette. You can even change the button label by selecting "short", or the label top left by selecting "long", in each case entering your phrase with quotes in the text box.
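If you want to check a candidate palette before pasting it in, here is a small R sketch (a hypothetical helper, not part of the page's JavaScript) that expands the compact notation into its full list of colors:

  expand_palette <- function(p) {
    m <- matrix(p, ncol = 4, byrow = TRUE)          # rows of R, G, B, ramp count
    cols <- NULL
    for (i in seq_len(nrow(m) - 1)) {
      n <- m[i, 4]                                  # colors ramping toward the next row
      ramp <- sapply(1:3, function(k) seq(m[i, k], m[i + 1, k], length.out = n + 1)[1:n])
      cols <- rbind(cols, ramp)
    }
    cols[nrow(cols), ] <- m[nrow(m), 1:3]           # end exactly on the last color
    cols
  }

  rainbow_pal <- c(1,0,0,64, 1,1,0,64, 0,1,0,64, 0,1,1,64, 0,0,1,64)
  pal <- expand_palette(rainbow_pal)
  nrow(pal)                                         # 256 colors, as required
  head(rgb(pal[, 1], pal[, 2], pal[, 3]))           # hex strings for a quick preview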





Friday, March 3, 2017

NCEP/NCAR rises again by 0.09°C in February

It's getting very warm again. January was warmer than any month before October 2015 in the Moyhu NCEP/NCAR reanalysis index. February was warmer again, and is warmer than Oct/Nov 2015, and behind only Dec-April in the 2015/6 El Nino. And it shows no downturn at month's end.

Karsten also had a rise of 0.1°C from GFS/CFSR. UAH V6 is up by 0.05°C. And as I noted in the previous post, Antarctic sea ice reached a record low level a few days ago.



Wednesday, March 1, 2017

Record low sea ice minimum in Antarctica

I've been tracking the Antarctic Sea Ice. It has been very low since about October, and a new record looked likely. Today I saw in our local paper that the minimum has been announced. And indeed, it was lowest by a considerable margin. The Moyhu radial plot showed it thus:



The NSIDC numbers suggest that there may still be some melting but, if so, it won't last much longer. The Arctic seems to be at record low levels again also, which may be significant for this year's minimum.



Thursday, February 16, 2017

GISS rose 0.13°C in January; now 0.92°C.

Gistemp rose from 0.79°C in December to 0.92°C in January. That is quite similar to TempLS mesh, where the rise has since come back to 0.094°C. There were also similar rises in NCEP/NCAR and the satellite indices.

As with the other indices, this is very warm. It is higher than the anomaly of almost any month before Oct 2015 (only Jan 2007, at 0.96°C, was higher). And according to NCEP/NCAR, February so far is even warmer.


I'll show the regular GISS plot and TempLS comparison below the fold.

Wednesday, February 15, 2017

Changes to Moyhu latest monthly temperature table.

A brief note - I have changed the format of the latest monthly data table. The immediate thing to notice is that it starts with latest month at the top.

Previously there were two tables - the last six months of some commonly quoted datasets, and below it a larger table of data back to the start of 2013, with more datasets included. This was becoming unwieldy.

Now there is just one table going back to 2013, but starting at the latest month, so you have to scroll down for earlier times. It has the most commonly quoted sets, but there are buttons at the top that you can click to get a similar table of other subsets. "Main" is the starting table; "TempLS" has a collection (still coming) of other styles of integration, and also results with adjusted GHCN.

I'm gradually moving to RSS V4 TTT to replace the deprecated V3 TLT. I'm still rearranging the order of columns somewhat; there is reorganisation happening behind the scenes.





Monday, February 13, 2017

Spatial distribution of flutter in GHCN adjustment.

I posted recently on flutter in GHCN adjustment. This is the tendency of the Pairwise Homogenisation Algorithm (PHA) to produce short-term fluctuations in monthly adjustments. It arose in a recent discussion of the kerfuffle of John Bates and the Karl 2015 paper, and has been investigated by Peter O'Neill, who is currently posting on the topic. In my earlier post, I looked at the distribution of individual month adjustments, and noted that, with generally zero mean, they would be heavily damped on averaging.

But I was curious about the mechanics, so here I compare the same two adjusted files (June 2015 and Feb 9 2017) collected by station. I'll show a histogram, but more interesting is the spatial distribution shown on a trackball sphere map. The histogram shows a distribution of station RMS values tapering rather rapidly toward 1°C. The map shows the flutter is strongly associated with remoteness, especially islands.
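For anyone wanting to reproduce the station RMS calculation, here is a minimal sketch (not my exact code), assuming the two adjusted datasets have already been read into data frames adj_2015 and adj_2017 (hypothetical names) with columns id, year, month and tavg in °C:

  # Match the two files month by month and station by station
  both <- merge(adj_2015, adj_2017, by = c("id", "year", "month"),
                suffixes = c("_2015", "_2017"))
  both$diff <- both$tavg_2017 - both$tavg_2015

  # RMS of the month-by-month adjustment changes, collected by station
  rms <- aggregate(diff ~ id, data = both,
                   FUN = function(x) sqrt(mean(x^2, na.rm = TRUE)))
  hist(rms$diff, breaks = 50,
       main = "RMS adjustment change per station (°C)")

Per-station RMS values of this kind are what the histogram and the trackball map display.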

Update: I have now enabled clicking to show not only the name of the nearest station, but the RMS adjustment change there in °C. I have also adopted William's suggestion about the color scheme (white for zero, red for large).

Friday, February 10, 2017

January global surface temperature up 0.155°C.

TempLS mesh rose significantly in January, from 0.66°C to 0.815°C. This follows the earlier very similar rise of 0.13°C in the NCEP/NCAR index, and rises in the satellite indices, including a 0.18°C rise in the RSS index. January was the warmest month since April 2016, and as with NCEP/NCAR, it was warmer (in anomaly) than any month before October 2015.

TempLS grid also rose by 0.12°C. I think this month's temperatures were not greatly affected by the poles. The breakdown plot was interesting, with contributions to warmth from N America, Asia, Siberia and Africa, with Arctic also warm as usual lately.


Thursday, February 9, 2017

Flutter in GHCN V3 adjusted temperatures.

In the recent discussion of the kerfuffle of John Bates and the Karl 2015 paper, Bates' claim that the GHCN adjustment algorithm was subject to instability arose. His claim seemed to be of an actual fault in the code. I explained why I think that is unlikely; rather, it is a feature of the Pairwise Homogenisation Algorithm (PHA).

GHCN V3 adjusted is issued approximately daily, although it is not clear how often the underlying algorithm is run. It is posted here - see the readme file and look for the qca label.

Paul Matthews linked to his analysis of variations over time in the adjusted Alice Springs record. It did look remarkable: fluctuations of a degree or more over quite short intervals, with maximum excursions of about 3°C. This was in about 2012. However, Peter O'Neill had done a much more extensive study with many stations and more recent years (and using many more adjustment files). He found somewhat smaller variations, of frequent but variable occurrence.

I don't have a succession of GHCN adjusted files available, but I do have the latest (downloaded 9 Feb) and I have one with a file date here of 21 June 2015. So I thought I would look at differences between these to try to get an overall picture of what is going on.

Friday, February 3, 2017

NCEP/NCAR January warmest month since April 2016.

The Moyhu NCEP/NCAR index at 0.486°C was warmer than any month since April 2016. But it was a wild ride. It started very warm, dropped to temperatures lower than seen for over a year, rose again, and the last two weeks were very warm; it still is. The big dip coincided with the cold snap in E North America, and in Central and East Europe, extending through Russia. Then N America warmed a little, although some of Europe stayed cold, and there was the famous snow in the Sahara. Overall, Arctic and Canada (despite cold snaps) were warm, as was most of Asia. Europe and the Sahara were indeed cold.

I'll note just how warm it still is, historically. I don't make too much of long term records in the reanalysis data, since it can't really be made homogeneous. But January was not only the warmest month since April 2016, but warmer (by a lot) than any month prior to October 2015.

UAH satellite, which dropped severely in December, rose a little, from 0.24°C to 0.30°C. Arctic sea ice, which had been very low, recovered a bit to be occasionally not the lowest recorded for the time of year. Antarctic ice is still very low, and may well reach a notable minimum.

With the Arctic still warm, I would expect a substantial rise for GISS and TempLS mesh, with maybe less for HADCRUT and NOAA.

Wednesday, February 1, 2017

Homogenisation and Cape Town.

An old perennial in climate wars is the adjustment of land temperature data. Stations are subject to various changes, like moving, which lead to sustained jumps that are not due to climate. For almost any climate analysis that matters, these station records are taken to be representative of some region, so it is important to adjust for the effect of these events. So GHCN publishes an additional list of adjusted temperatures. They are called homogenised, with the idea that, as far as can be achieved, temperatures from different times are as if measured under like conditions. I have written about this frequently, eg here, here and here.

The contrarian tactic is to find some station that has been changed and beat the drum about rewriting history, or some such. It is usually one where the trend has changed from negative to positive. Since adjustment does change values, this can easily happen. I made a Google Maps gadget here which lets you see how the various GHCN stations are affected, and posted histograms here. This blog started its life following a classic 2009 WUWT sally here, based on Darwin. That was probably the most publicised case.

There have been others, and their names are bandied around in skeptic circles as if they were Agincourt and Bannockburn. Jennifer Marohasy has for some reason an irrepressible bee in her bonnet about Rutherglen, and I think we'll be hearing more of it soon. I have a post on that in the pipeline. One possible response is to analyse individual cases to show why the adjustments happened. An early case was David Wratt, of NIWA, on Wellington, showing that the key adjustment coincided with a site move involving a big altitude change. I tried here to clear up Amberley. It's a frustrating task, because there is no acknowledgement - they just go on to something else. And sometimes there is no clear outcome, as with Rutherglen. Reykjavik, often cited, does seem to be a case where the algorithm mis-identified a genuine change.

The search for metadata reasons is against the spirit of homogenisation as applied. The idea of the pairwise algorithm (PHA) used by NOAA is that it should be independent of metadata and rely solely on numerical analysis. There are good reasons for this. Metadata means human intervention, with possible bias. It also inhibits reproducibility. Homogenisation is needed because of the possibility that the inhomogeneities may have a bias. Global averaging is very good at suppressing noise (see here and here), but it is vulnerable to bias. So identifying and removing possibly biased events is good. It comes with errors, which contribute noise. This is a good trade-off. It may also create a different bias, but because PHA is automatic, it can be tested for that on synthetic data.

So, with that preliminary, we come to Cape Town. There have been rumblings about this from Philip Lloyd at WUWT, most recently here. Sou dealt with it here, and Tamino touched on it here, and an earlier occurrence here. It turns out that it can be completely resolved with metadata, as I explain at WUWT here. It's quite interesting, and I have found out more, which I'll describe below the jump.

Tuesday, January 31, 2017

A guide to the global temperature program TempLS

TempLS is an R program that I have been running for some years at Moyhu. It computes a land/ocean temperature average in the style of Gistemp, HADCRUT or NOAA (see Wiki overview). In this post, I want to collect links to what I have written about it over the years, describe the methods, code and reporting, and something about the graphics. I'll then promote it to a maintained page. I'll start with a Table of Contents with links:
  • Introduction - reporting cycle
  • Summary, methods and code
  • History
  • Graphics and other output
  • Tests and comparisons

Friday, January 27, 2017

Global anomaly spatial sampling error - and why use anomalies?

In this post I want to bring together two things that I seem to be talking a lot about, especially in the wake of our run of record high temperatures. They are
  • What is the main component of the error that is quoted on global anomaly average for some period (month, year)? and
  • Why use anomalies? (an old perennial, see also GISS, NOAA)
I'll use the USHCN V2.5 dataset as a worked example, since I'm planning to write a bit more about some recent misuse of that. In particular I'll use the adjusted USHCN for 2011.

Using anomalies

I have been finding it necessary to go over some essentials of using anomalies. The basic arithmetic is
  • Compute some "normal" (usually a 30-year period time average for each month) for each station in the network,
  • Form local anomalies by subtracting the relevant normal from each reading
  • Average the anomalies (usually area-weighted)
People tend to think that you get the anomaly average just by averaging, then subtracting an offset. That is quite wrong; anomalies must be formed before averaging. Afterward you can shift to a different anomaly base by offsetting the mean.
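In R, the arithmetic looks something like this minimal sketch (hypothetical data frame obs with columns id, year, month and temp; not the actual USHCN code):

  # 1. Normals: the 1981-2010 mean for each station and calendar month
  normals <- aggregate(temp ~ id + month,
                       data = subset(obs, year >= 1981 & year <= 2010), FUN = mean)
  names(normals)[names(normals) == "temp"] <- "normal"

  # 2. Local anomalies, formed station by station before any averaging
  obs <- merge(obs, normals, by = c("id", "month"))   # stations with no normal drop out
  obs$anom <- obs$temp - obs$normal

  # 3. Only now average (unweighted here; area-weighted in practice)
  mean(obs$anom[obs$year == 2011], na.rm = TRUE)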

Coverage error - spatial sampling error for the mean.

Indices like GISS and HADCRUT usually quote a monthly or annual mean with an uncertainty of up to 0.1°C. In recent years contrarians have seized on this to say that maybe it isn't a record at all - a "statistical tie" is a pet phrase, for those whose head hurts thinking about statistics. But what very few people understand is what that uncertainty means. I'll quote here from something I wrote at WUWT:

The way to think about stated uncertainties is that they represent the range of results that could have been obtained if things had been done differently. And so the question is, which "things". This concept is made explicit in the HADCRUT ensemble approach, where they do 100 repeated runs, looking at each stage in which an estimated number is used, and choosing other estimates from a distribution. Then the actual spread of results gives the uncertainty. Brohan et al 2006 lists some of the things that are varied.

The underlying concept is sampling error. Suppose you conduct a poll, asking 1000 people if they will vote for A or B. You find 52% for A. The uncertainty comes from, what if you had asked different people? For temperature, I'll list three sources of error important in various ways:

1. Measurement error. This is what many people think uncertainties refer to, but it usually isn't. Measurement errors become insignificant because of the huge number of data that are averaged. Measurement error estimates what could happen if you had used different observers or instruments to make the same observation: same time, same place.

2. Location uncertainty. This is dominant for global annual and monthly averages. You measured in sampled locations - what if the sample changed? What if you had measured in different places around the earth? Same time, different places.

3. Trend uncertainty, what we are talking about above. You get trend from a statistical model, in which the residuals are assumed to come from a random distribution, representing unpredictable aspects (weather). The trend uncertainty is calculated on the basis of, what if you sampled differently from that distribution? Had different weather? This is important for deciding if your trend is something that might happen again in the future. If it is a rare event, maybe. But it is not a test of whether it really happened. We know how the weather turned out.


So here I'm talking about location uncertainty: what if you had sampled in different places? In this exercise I'll do just that. I'll choose subsets of 500 of the USHCN stations and see what answers we get. That is why USHCN is chosen - there is surplus information from its dense coverage.

Why use anomaly?

We'll see. What I want to show is that it dramatically reduces location sampling error. The reason is that the anomaly set is much more homogeneous, since the expected value everywhere is more or less zero. So there is less variation in switching stations in and out. So I'll measure the error with and without anomaly formation.

USHCN example

So I'll look at the data for the 1218 stations in 2011, with anomalies relative to the 1981-2010 average. In Monte Carlo style, I make 1000 choices of 500 random stations, and find the average for 2011, first by just averaging station temperatures, and then the anomalies. The results (in °C) are:

Base 1981-2010, unweighted      Mean of means      s.d. of means
Temperatures                    11.863             0.201
Anomalies                       0.191              0.025
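For concreteness, the sampling exercise can be sketched like this (hypothetical vectors t2011 and a2011 holding the 1218 station values as absolute temperatures and as anomalies; not my exact code):

  set.seed(1)
  runs <- replicate(1000, {
    pick <- sample(length(t2011), 500)     # a different 500-station subset each time
    c(T = mean(t2011[pick], na.rm = TRUE),
      A = mean(a2011[pick], na.rm = TRUE))
  })
  apply(runs, 1, mean)                     # mean of means, temperatures and anomalies
  apply(runs, 1, sd)                       # spread of the means - the spatial sampling error

The area-weighted version follows the same pattern, with a state-by-state weighted mean inside the loop instead of the simple mean.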


So the spatial error is reduced by a factor of 8, to an acceptable value. The error of temperature alone, at 0.201, was quite unacceptable. But anomalies perform even better with area-weighting, which should always be used. Here I calculate state averages and then area-weight the states (as USHCN used to do):

Update: I had implemented the area-weighting incorrectly when I posted about an hour ago. Now I think it is right, and the sd's are further reduced, although now the absolute temperatures improve by slightly more than the anomalies do.

Base 1981-2010, area-weighted   Mean of means      s.d. of means
Temperatures                    12.102             0.137
Anomalies                       0.101              0.016


For both absolute T and anomalies, area-weighting changes the mean a little and reduces the sd. In fact T improves by a slightly greater factor, but its sd is still rather too high. The anomaly sd is now very good.

Does the anomaly base matter? A little, which is why WMO recommends the latest 3-decade period. I'll repeat the last table with the 1951-80 base:

Base 1951-80, area-weighted     Mean of means      s.d. of means
Temperatures                    12.103             0.138
Anomalies                       0.620              0.021

The T average is little changed, as expected. The small change reflects the fact that averaging over 1000 samples makes the results almost independent of the random choice. But the anomaly mean is higher, reflecting warming. And the sd is a little higher, showing that subtracting a slightly worse estimate of the 2011 value (the older base) makes a less homogeneous set.

So what to make of spatial sampling error?

It is significant (with 500-station subsets) even for anomalies, and is the reason why large datasets are sought. In terms of record hot years, I think there is a case for omitting it. It is the error you would incur if, between 2015 and 2016, the set of stations had been changed - and that happened only to a very small extent. I don't think the theoretical possibility of juggling the station set between years is an appropriate consideration for such a record.

Conclusion

Spatial sampling, or coverage, error for anomalies is significant for ConUS. Reducing this error is why a lot of stations are used. It would be an order of magnitude greater without the use of anomalies, because of the much greater inhomogeneity - which is why one should never average raw temperatures spatially.