Sunday, December 25, 2011

Optimism for Christmas - A grand Gift

Most of the energy issues facing our world can be repaired with the proper steps. All it takes is optimism and intelligence. I have been worried for some time because the pessimists have been in charge.

There is a fine line between optimism and pessimism, which is called realism. Nothing good results from too much of any of them, even realism. Even a realist needs a little craziness from time to time.

Municipal scale cooperative utilities are my crazy vision of the future: combining power generation, waste disposal, and water treatment and production in efficient co-generation to get the maximum benefit for the community per buck.

No more NIMBY mentality. Deal with your own shit on your own turf. The technology is available now to start on that path. There are lots of great ideas that have waited for their time. Which ones will win depends on the needs and desires of the community.

So I will be digging through some of the better ideas I have and adding a few of my own, now that it looks like the rough patch is getting shorter.

Happy Holidays and a prosperous future.

Saturday, December 24, 2011

What Just Happened? Is Hydrogen Back in the Picture?

The EPA made a politically timed announcement that the Maximum Achievable Control Technology (MACT) standards are now in force in the United States. Under the guise of finally getting mercury pollution from nasty coal-fired power plants under control, the MACT will have an impact on about 10 percent of the older coal power plants, with 12 percent of the currently operating power plants already meeting the tighter MACT standards. While the Greens strut around proclaiming victory over nasty coal, the MACT seems to endorse clean coal technology, or cleaner coal technology if you prefer.

As usual, the industries that will bear the brunt of the regulation will not be the target mentioned in the media hype. Forestry and pulp products, smaller scale industrial power generation and institutional (university and military) power and thermal plants will have to get out of the power business.

Pulp mills have worked hard over the past 20 years to bring emissions under control to meet the demands of encroaching residential property owners who build homes near pulp plants. Hey, the land was cheap for a reason, guys.

It is all good, other than that suburban sprawl started the ball rolling. Cleaner emissions generally mean more efficient energy use.

Integrated Gasification Combined Cycle power generation, the cleaner coal technology, meets the EPA regulations, which opens the door to a variety of mixed-fuel and synergistic industrial applications. The only problem is, will the small guys feel the boot of big government and be driven out of the picture?

I haven't posted on this blog in quite some time because nothing has happened. MACT may be a big something. With some reasonable assurance that the rules are not going to change for the 50 years or so required to invest in new coal and unconventional fuel technology, the EPA may have unleashed the innovative potential of American entrepreneurs. The tide may have turned!

Monday, October 24, 2011

Simple Versus Too Simple

Making the complex simple to understand is the goal of science, of any discipline really. That goal often requires compromises, where one portion of the overall concept is explained by analogy to a commonly understood concept.

Physics uses many basic analogies, Carnot engines, equilibrium and adiabatic processes, as foundations even though none may ever exist. They are convenient models of perfection for comparison.
In atmospheric physics, the dry adiabatic lapse rate, where temperature changes with pressure with no gain or loss of energy to the system, is an example of an equilibrium state with perfect energy transfer, a Carnot engine. Perfection does not exist in nature; it can only be approached.

The dry adiabatic lapse rate in Earth’s atmosphere is the combination of the surface temperature, the composition of the gases in the atmosphere, the molecular weight of the gases, the thermal properties of the gases, the gravitational acceleration and radiant energy interaction with the changing density and composition of gases compressed by gravity. It is a rather complicated process that we on the surface take for granted.
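For all that complexity, the ideal dry adiabatic lapse rate reduces to the gravitational acceleration divided by the specific heat of air at constant pressure. A minimal sketch, using standard textbook values:

```python
# Ideal dry adiabatic lapse rate: Gamma = g / c_p.
# A rising parcel of dry air cools at this rate with altitude,
# assuming no heat exchange with its surroundings.
g = 9.81      # gravitational acceleration, m/s^2
c_p = 1004.0  # specific heat of dry air at constant pressure, J/(kg K)

gamma = g / c_p     # K per meter
print(gamma * 1000) # roughly 9.8 K per kilometer
```

The observed environmental lapse rate is smaller, around 6.5 K per kilometer, precisely because of the latent and radiant complications described above.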

If you are in favor of electrical analogies, the adiabatic lapse rate is an inductive load with a steady state current. Small changes in current are damped by the properties of the inductor, while rapid changes produce huge changes in the potential energy, or electromotive force, realized across the inductive load.
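To put numbers on the analogy: the EMF across an ideal inductor is v = L·dI/dt, so the same total current change produces a small voltage when applied slowly and a large one when applied quickly. A toy sketch (the 1 H inductance and the time scales are illustrative values, not from the text):

```python
# EMF across an ideal inductor: v = L * dI/dt.
L = 1.0   # inductance in henries (illustrative)
dI = 2.0  # total current change, amps

v_slow = L * dI / 10.0  # change spread over 10 s  -> 0.2 V
v_fast = L * dI / 0.01  # same change in 10 ms     -> 200 V

print(v_fast / v_slow)  # the fast change produces 1000x the EMF
</```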

The electromotive force is provided not by a single source, but several, a conductive battery, a latent battery, a gravitational battery and a radiant battery are the more significant power sources.

The radiant battery is both solar and black body, with cells poorly designed for the task, but adequate in steady state conditions. In steady state, the potential can be determined at different points in the atmospheric circuitry and the total accurately calculated from one connection to the next, i.e., if we know the voltage and current into a black box and the current and voltage out of that black box, we can determine to a point what circuitry is in the box. With more than one condition, we can better describe the inner circuitry.

The currents are in parallel from the electromotive sources at the surface, Fc, Fl, Fr and F?, for conductive, latent and radiant, where the question mark is the ever-present uncertainty. Each of the batteries providing these currents, or fluxes, has cells, Fra, Frb, Frc … Frn, for example. The subscript letters can be individual wavelengths, associated energies, or combinations of wavelengths and energies that impact portions of the atmosphere.

This is the simplicity of the Kimoto equation, dF/dT = 4(aFc + bFl + cFr + … F?)/T, which is derived from Stefan’s law, Fi/Fo = alpha(Ti)^4/alpha(To)^4; the change in energy flux of a body is proportional to the change in temperature of the body at initial temperature T. The coefficients a, b, … n represent changes to the flux through the atmospheric inductor, or impedance.
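The 4F/T factor comes straight from differentiating F = sigma·T^4, and is easy to verify numerically; a quick sketch:

```python
# Stefan-Boltzmann: F = sigma * T**4, so dF/dT = 4*sigma*T**3 = 4*F/T.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 288.0        # K

F = sigma * T**4      # ~390 W/m^2 at 288 K
analytic = 4 * F / T  # the Kimoto form with all coefficients equal to 1
dT = 1e-3
numeric = (sigma * (T + dT)**4 - F) / dT  # finite-difference check

print(analytic)  # ~5.4 W/m^2 per K
```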

Proper use of this simple equation requires proper consideration of the flux values and the ever-present uncertainty.

Sunday, October 23, 2011

New Blog For Easier Navigation

This has been my trash blog for a long time. Random thoughts on random subjects. For the Climate Change crowd, I have started a new blog to try and better organize things. You need to have a common starting point to see how a complex set of feedbacks and natural responses combine into a very interesting balance.

The New Blog, CaptDallas' Redneck Theoretical Physics Forum. There is quite a bit of pun intended in the title and the attitude. The curious may find it interesting.


For the insomniacs in the crowd, The Energy Budget of the Polar Atmosphere in MERRA is a nice light read. There are pretty significant discrepancies, more than enough to cover the conductive issue I am trying to quantify. The devil is in the details, but adapting the equation should shed some light on the issue. Poor satellite coverage is not helping at all. There is a significant lead of surface change over down welling change, though, that is interesting.

Saturday, October 22, 2011

Carbon Dioxide- A Not so Well Mixed Gas

In an atmosphere without significant water, carbon dioxide would be a very well mixed gas. Earth’s atmosphere has water in all phases and at different concentrations. This greatly complicates solutions for the changes in relative conductive and radiant properties of the atmosphere.

Carbon dioxide rains out in areas with high humidity and precipitation. The rate of diffusion varies with temperature and pressure, from the well mixed gas ratio to regions where CO2 is depleted via rain out. Using global averages provides good results, but for regional evaluation, the changes and rates of change in CO2 must be considered.

The Antarctic, with its low precipitation rate and very cold climate, offers a baseline for CO2 change in the overall atmosphere. It is in the Antarctic where the impact of CO2 on conductive flux is most evident and the impact on radiant flux most overestimated. The blend of underestimated conductive change and overestimated radiant change is uniquely Antarctic.

While theories are plentiful, the reality is hard to determine. Sublimation cannot be completely ruled out on a microscopic scale, due to conditions available between the Antarctic Tropopause and the surface temperatures and pressures.

The exact psychrometric relationships will require a great deal of further study. However, as tropospheric temperatures can approach -95C and the temperatures and pressures of the Antarctic can be less than -60C at 1020mb, microscopic sublimation is possible provided a deposition substrate of a few atoms can be found. Microscopic carbonic snow, an interesting theory for idle moments.

Carbon dioxide concentration lags between the Antarctic and Mauna Loa would then be much more easily analyzed.

With a reliable estimate of the changes in carbon dioxide concentration, the Poisson equation can be adjusted to the specified thermal properties of the atmosphere regionally, adding greatly to the utility of the Kimoto equation.

Thursday, October 20, 2011

Another Shot at Explaining the Atmospheric Effect

I found a dedication quote in response to this question:
Dallas: "Do you actually believe that down welling long wave radiation is nearly twice solar?"

"Yes, I do, because that’s what the measurements show and that’s what’s required to close the energy balance. See SURFRAD data, for example ( http://www.srrb.noaa.gov/surfrad/aod/aodpick.html )."

That's what's required? A perfect display of biased perception. That is the reason I am stating what should be obvious to inquisitive minds.


Carbon dioxide in the atmosphere both warms and cools. This is nothing new. The fear has been it will warm more than cool. At times it will. The relationship is complex.

While I would prefer to move on to other interests, I am asked why and how I may know this. The truth is the Kimoto equation is a very valuable tool for quickly testing relationships between radiant, conductive and latent thermal fluxes in the atmosphere. Simply, it works. How well, I am still working on that.

With the equation dF/dT=4(0.33Fc+1.09Fl+0.825Fr)/T, where Fc is the conductive, Fl the latent and Fr the radiant thermal flux from the surface at 288K, it is easy to use the standard values available from NASA to do “what ifs” to your heart’s content. That, and a basic understanding of thermodynamics, is all it takes to see what is happening.
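Plugging in the steady state fluxes used in this post, and taking the coefficients as given, the equation evaluates in a couple of lines:

```python
# Kimoto form with the coefficients given in the text:
# dF/dT = 4*(0.33*Fc + 1.09*Fl + 0.825*Fr) / T
T  = 288.0  # surface temperature, K
Fc = 24.0   # conductive (sensible) flux, W/m^2
Fl = 79.0   # latent flux, W/m^2
Fr = 287.0  # radiant flux, W/m^2 (390 - 24 - 79)

dFdT = 4 * (0.33 * Fc + 1.09 * Fl + 0.825 * Fr) / T
print(dFdT)  # roughly 4.6 W/m^2 per K of surface warming
```

From there the "what ifs" are just a matter of perturbing Fc, Fl or Fr and watching how the sensitivity shifts.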

The basic thermodynamics should be obvious. If surface warming is due to Fr being restricted, the other two fluxes will increase as temperature increases. The water vapor increase is well known, but the increase in conduction seems to have been overlooked. It will increase. That is a cooling effect.

Perhaps the confusion is in the values, 0.33, 1.09 and 0.825? These are determined from the steady state condition of the Earth at 288K, and the 390Wm-2 associated with 288K by the relationship of a black body’s radiant energy via Stefan’s law. If the steady state values, 24Wm-2 conductive, 79Wm-2 latent and 390-24-79=287Wm-2 radiant, and the weighted terms 0.33Fc, 1.09Fl and 0.825Fr are correct, and be my guest and check my work, then you can determine roughly what will change and by how much. It is easier to see if you consider what would change.

Fr is the total of all surface radiation after allowing for conductive and latent cooling. Fr includes the energy absorbed by the atmosphere’s greenhouse gases, the energy eventually lost directly to space through the atmospheric window, and the up welling energy matching the down welling atmospheric effect, or greenhouse effect. The radiant energy absorbed by the atmosphere is approximately 80 Wm-2, which can be determined from the NASA Earth Energy Budget drawing, where they have clearly shown how incoming solar energy is matched by outgoing combined conductive, latent and radiant flux. The remainder, 287-80=207 Wm-2, is the approximate greenhouse effect. Depending on which source drawing you use, NASA or the Kiehl & Trenberth drawings, the 207 varies up to approximately 220 Wm-2. Small change, but the values are approximate.
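The bookkeeping in that paragraph is simple enough to check directly:

```python
# Flux partition at the surface, per the numbers in the text.
total = 390.0         # blackbody flux at 288 K, W/m^2
Fc, Fl = 24.0, 79.0   # conductive and latent fluxes
Fr = total - Fc - Fl  # remaining radiant flux

absorbed = 80.0       # radiant energy absorbed by the atmosphere (approx.)
greenhouse = Fr - absorbed

print(Fr, greenhouse)  # 287.0 and 207.0
```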

The coefficients are "effective" values in that they affect the atmospheric absorption. The 207 to 220 Wm-2 is a balancing force that would vary only if the effects of the three thermal fluxes increase the surface temperature. Then the 207-220 would increase to balance the atmospheric effect.

If you look at the top of the atmosphere, you will see that the solar absorbed by the atmosphere and clouds plus the solar absorbed by the surface is roughly 240Wm-2. The total absorbed by the atmosphere, OLR from the surface plus incoming solar, equals roughly 240Wm-2, and the total leaving the atmosphere is roughly 240Wm-2. That is the energy balance. The 207 to 220 Wm-2 is the value of the greenhouse effect and is internal to the system.

This value is different from the classic top of the atmosphere value of 390-240=150Wm-2, often noted as 155 to 160Wm-2 depending on the initial values used. That flux value corresponds to the 33C the Earth is considered to be warmer because of the combined atmospheric effects: conductive, latent and radiant energy transferred to the atmosphere from the surface to become the potential energy holding the atmospheric gases above the surface, in opposition to the gravity attempting to pull them back down. It is higher because the efficiency of the work done and the opacity of the atmosphere vary with pressure.

Everything balances, which is the desired result if you are attempting an Energy Balance of the Earth. The surface, the atmosphere, the top of the atmosphere and the potential energy of the atmosphere, the atmospheric effect, all of these are considered with these values. There are of course small differences due to rounding and uncertainty, but everything is in reasonable balance.

If there is more warming of the atmosphere, that is, the greenhouse effect is getting stronger, the coefficients of the surface fluxes Fc, Fl and Fr increase. That would add to the potential energy of the atmosphere and would have to be balanced by an increase of the 207-220 Wm-2.

The hard part for some to grasp is that increased atmospheric absorption reduces the potential energy difference between the surface and the atmosphere, reducing heat transfer to the atmosphere, with some exceptions, causing interesting feedbacks. These are the rather complex feedbacks to the warming surface. Clouds both absorb more from the surface and reflect more solar from above. CO2 above the clouds retains more heat, which warms the cloud tops first, which tends to increase convection in the upper troposphere. More CO2 improves the conductivity, which allows more efficient heat transfer from the surface to the lower troposphere. The impacts of these feedbacks vary from region to region.

The tropics are virtually saturated for all three heat fluxes. More radiant warming above the clouds increases convection, which increases latent cooling; winds increase and precipitation tends to cool the surface, offsetting warming. The southern pole is temperature limited due to the angle of inclination; increased conduction balances increased radiant forcing, resulting in little surface temperature change. It is in the northern polar and subtropical regions where radiant forcing impacts the surface temperature the most.

Since increased CO2 impacts a relatively small portion of the radiant spectrum at the surface, the radiant energy flux to space in the atmospheric window increases, which does increase surface warming somewhat, but is limited by near saturation of the CO2 portion of the surface radiant window. Higher in the troposphere, the atmospheric window helps cool the cloud tops warmed by CO2 forcing.

It is a complex system with many more feedbacks than commonly discussed in the literature. The conductive impact and the downward opacity to increased infrared forcing are virtually ignored, yet crucial for understanding the atmospheric effects. Minimum Local Emissivity Variations are just being evaluated to improve the accuracy of satellite telemetry and of surface down welling radiation monitoring, which is plagued with inaccuracy.

Sometimes simple equations are much more valuable for analyzing a complex problem than millions of hours of computer modeling.

Now, try the equation and look out the window.

What? Need more information?

Then let us start at the beginning.

The Earth’s Virgin atmosphere.

If the Earth had no atmosphere, if it were just floating in space minding its own business, the surface temperature would be about 278 degrees K, or about five degrees C above zero on average. That is because the sun warms the Earth with a globally averaged 340 Wm-2 of energy. If the Earth had snow on the surface that reflected a portion of this energy, it would be colder, as less solar energy would be absorbed.

So if 30% of the sunlight were reflected, the average temperature would be about 255K, which is 18 degrees C below zero. The Earth though has an abundance of nitrogen and oxygen, gases that have a small but significant thermal conductivity of about 0.025 W/m·K at 20 degrees C and about 0.024 W/m·K at -18 degrees C. So even at the colder temperature, the virgin Earth would have surface heat transferred to the atmosphere by conduction. We would have an atmosphere, even without greenhouse gases. Those interested may wish to read up on the ideal gas laws and visit the Engineering ToolBox dot com.
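Both temperatures follow from Stefan's law applied to the absorbed solar flux; a quick check:

```python
# Effective radiating temperature: T = (absorbed flux / sigma) ** 0.25
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 340.0        # globally averaged insolation, W/m^2

T_no_albedo = (S / sigma) ** 0.25               # nothing reflected, ~278 K
T_albedo30  = (S * (1 - 0.30) / sigma) ** 0.25  # 30% reflected, ~255 K

print(T_no_albedo, T_albedo30)
```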

This poses a bit of a challenge for what the virgin albedo of the Earth would be: would the energy be reflected from the surface, the atmosphere or both? Both is the obvious answer. Why? Because nitrogen and oxygen scatter some electromagnetic radiation, absorb some, and certain wavelengths cause chemical changes, like O2, oxygen, being split by ultraviolet light and recombining as O3, ozone. This is a little more complicated, but the Engineering ToolBox has the information, which should be common knowledge for scientists involved in atmospheric physics.

In addition, the Earth has plenty of water, which at the equator would not only be liquid, but evaporate, adding water vapor to the atmosphere. Even if the water vapor had no interaction with outgoing longwave radiation from the surface, it would still interact with incoming solar. The virgin Earth would have a Tropopause, or an inversion, with the atmosphere cooled from below by radiant energy released from water vapor and conductive energy dissipating to space, and warmed from above by solar interaction with oxygen and ozone.

With part of the albedo, or reflection of solar energy, being in the virgin atmosphere, the surface temperature would be approximately 2 degrees C different, depending on the ratio of surface to atmospheric absorption. This is what a no greenhouse gas Earth atmosphere would be: not a rock in space with no atmosphere at all, but a planet with a simple atmosphere that obeys the principles of physics.


The Surface-Atmosphere Solar Absorption Ratio

Without getting into too much detail, the ratio of the solar energy absorbed by the atmosphere versus the surface defines the atmospheric effect. This balance, or ratio, varies to control the surface temperature. Change the radiant energy forcing and that balance is thrown off, requiring the Earth and atmosphere to seek a new equilibrium state. This is the “Enhanced” Greenhouse Effect, aka Global Warming, aka Climate Change, aka Climate Disruption. Understanding starts with the natural ratio and how it will be changed.

Readers with some experience in thermodynamics will have noted that the description of the Virgin Atmosphere provides three main frames of reference, the surface, the Tropopause and the Top of the Atmosphere (TOA). Properly balanced from one frame of reference, all frames of reference can be described. That is a simple check to verify the accuracy of your solution, Thermo 101 stuff.


The Solar Ratio and Impact of Conductive Heat Transfer

The basic model of the Virgin Earth Atmosphere is very educational. Conductive heat transfer is responsible for most of the atmospheric effect, latent cooling balances the conductive heat transfer and generates indirectly the clouds that maintain the solar absorption ratio. A beautifully simple and elegant relationship. The Radiant component of heat transfer enhances the conductive/latent relationship, it does not dominate the relationship.

Of the 240Wm-2 of solar absorbed by the Earth system, approximately 175Wm-2 is absorbed by the surface and 65Wm-2 is absorbed by the atmosphere. This is an important ratio, approximately 0.37. If you are curious, you would notice that the ratio of conductive to latent surface flux is 24/79, or approximately 0.30. If you are really curious, you would investigate the sensible portion of latent cooling, combine that with the conductive flux, which is a sensible heat transfer, and find that (24+5)/74 = 0.39. The surface response attempts to balance the solar impact. How these two ratios vary with respect to each other would determine whether the surface is warming or cooling; GHGs enhance this relationship. The values used are approximations, but accurately calculated, the relationship would hold true.
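The three ratios compared above are easy to reproduce:

```python
# Atmosphere/surface solar absorption ratio versus the surface flux ratios.
solar_ratio = 65.0 / 175.0         # atmospheric / surface solar absorption, ~0.37
flux_ratio  = 24.0 / 79.0          # conductive / latent, ~0.30
sensible    = (24.0 + 5.0) / 74.0  # sensible part of latent moved to conduction, ~0.39

print(round(solar_ratio, 2), round(flux_ratio, 2), round(sensible, 2))
```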

So how does CO2 enhance the atmospheric effect?

At the surface, CO2 is a more efficient conductor of thermal energy, both as a radiant absorber and as a conductive gas. CO2 readily absorbs surface thermal energy and transfers that energy to the nitrogen and oxygen in the atmosphere. It is the inefficient heat transfer of nitrogen and oxygen that causes the atmospheric effect. Thermo 101 again: if nitrogen and oxygen were perfect conductors of thermal energy, there would be no energy transferred to the atmosphere. CO2 improves the conductivity, but does not make it perfect. Also, CO2 has a non-linear thermal conductivity: at 20C it is 0.09, nearly four times as conductive as N2 and O2, and at -20C it is 0.12, nearly a full order of magnitude greater than N2 and O2. Not an insignificant difference, even at trace gas quantities. While this conductive impact is often assumed to be negligible, the Antarctic temperature response appears to suggest otherwise.


Why is this the right way?


Starting on a solid thermodynamic base allows for double checking all values. Then differences, even subtle differences, can have meaning: something missed, something new or some silly mistake that is confusing the issue. The conductive portion of the atmospheric effect is fairly constant with temperature at a stable humidity. Conductive flux is directly related to surface pressure, a solid base value that would be simple to determine globally. The latent energy is more variable, but extensively monitored by satellite and surface stations. With solid data for conductive and latent, radiant flux can be accurately calculated, far more accurately than direct measurement by satellite and ground stations. This provides a method to check methods, which is very important in a dynamic system.

So why are the satellites and surface stations measuring radiant down welling flux so far off?

Because temperature is related to radiant flux, and neither is stable in the atmosphere: they are dynamic. Changes in humidity and conductive efficiency impact the already limited accuracy of direct measurement of thermal flux. Infrared pyrometers are designed to read temperatures by approximating the black body temperature of the object being tested. Atmospheric gases change temperature, density and composition continuously with the weather; why would their radiant energy flux be easy to measure? It is much easier to measure the average temperature of a layer of the atmosphere than it is to measure its energy flux emitted in all directions.

Where the satellites and ground stations are inaccurate is more informative than where they are accurate. Anomalies are the teachers.

Why am I so excited by the Flux measurement anomalies?

The anomalies appear to be indications of relativistic effects in the atmosphere! That is exciting, if true. Effects typically only measurable under strict laboratory conditions may be apparent in the petawatt-scale surface and atmosphere energy exchanges, involving peta^n collisions and absorptions of photons as they travel from the surface to space. Something lost so far to science because of a silly, erroneous assumption that data must fit preconceived notions. An interesting possibility.

Applications?

The most obvious is that the potential temperature of air at 600mb is a good indicator of changes in radiant forcing versus atmospheric response, aka feedbacks. With 600mb as a base value, the potential temperatures at varying altitudes would be a simple metric for modeling changes in thermal flux interaction at various atmospheric layers. Simple, IF the base pressure has a physical relationship to Down Welling Long Wave Radiation.
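The potential temperature metric comes from the Poisson relation; a minimal sketch (the 270 K parcel at 600 mb is a hypothetical example, not a value from the text):

```python
# Poisson relation: theta = T * (p0 / p) ** (R / c_p),
# the temperature a parcel would have if brought adiabatically
# to the reference pressure p0.
kappa = 0.286  # R / c_p for dry air

def potential_temperature(T, p, p0=1000.0):
    """T in kelvin, pressures in millibars."""
    return T * (p0 / p) ** kappa

theta = potential_temperature(270.0, 600.0)  # hypothetical parcel at 600 mb
print(theta)  # ~312 K
```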

Since the ratio of surface to atmospheric absorption of incoming solar irradiance is an indication of the atmospheric effect, comparisons of solar reconstructions with surface temperature reconstructions can be more informative. Now that it is known that the spectral bands of solar irradiance change more at ends of the spectrum than uniformly across the spectrum, the impact of the individual spectral changes on the atmosphere and surface, (read Oceans) can better explain the solar to temperature relationship.

Conductivity changes, though small, can be better studied to evaluate the Antarctic versus Arctic discrepancy, which is a valuable clue, not an instrumentation anomaly.

In short, the correct frame of reference can make a huge difference in understanding a complex system.

Wednesday, October 19, 2011

Phonon Versus Photon Research List

Since computers tend to crash, especially in humid environments like the Florida Keys, I am building a research list for the phonon versus photon thing to keep online. Most of what I am looking for is Minimum Local Emissivity Variance.

"In summary, the approximate 1 RU bias between the AERI and the LBLRTM in clear sky conditions is probably not due to calibration errors in the instrument, but is most likely atmospheric absorption that is not accounted for in the calculation."
— David Taylor, University of Wisconsin, Madison.

1 RU is approximately 20K, BTW. Maybe something, maybe not. The paper is a doctoral thesis, and those are often very readable and informative. Interesting list of references: Curry, Lindzen.

One interesting thing is that the phonon is not necessarily a particle, but an exchange of energy via vibrational excitation. In the atmosphere, the density, the connective tissue so to speak of the gas molecules, would be collision or compression, collisional transfer probably, but micro-shock waves of sound are a possibility. Pretty far-fetched, but interesting.

The PowerPoint presentation by Superluminal Quantum is cool. Still in the massless mode, where my potential model would have the smallest possible mass as quanta. They are closer to correct, I suppose, but I like mass, even if it is on the order of 10^42/10^35 per quantum. Of course, my mass would only exist if the photon collapsed or if the photon splintered where the fragments could not maintain angular momentum. It would still travel like a wiffle ball, jitter, and may have a maximum local velocity of c*2^.5, I don't know yet.

The Relativity Series Begins Under Cosmic Puzzles

Relativity Simplified?

The past masters of classical physics determined that there was some barrier that had to be considered for explanations of our universe to be accurate. Some little something that was only significant at certain times, velocities, densities, temperatures, etc. Everything in physics made sense, but only to certain points, then descriptions tended to fall apart. Something was missing.

Einstein determined that the ultimate barrier was the speed of light (http://en.wikipedia.org/wiki/Theory_of_relativity); mass, for example, approaches infinity as its velocity approaches the speed of light. The theory had to be separated into special relativity, for atomic particles, and general relativity, for most applications of sufficient mass. A photon traveling at the speed of light obviously does not have infinite mass, nor do electrons and the other subatomic particles. There must be a difference.

The CERN particle accelerator experiments attempted to more accurately measure the speed of subatomic particles, neutrinos, and unexpectedly found that their particle was moving faster than the speed of light. Actually, it only appeared to be moving faster than light. The timing of the release and capture of the neutrino was measured via GPS satellites orbiting the Earth. These satellites are moving at near the escape velocity of Earth’s gravity, and due to chance, one of the satellites was moving toward the release point from the capture point at an angle sufficient to cause a Doppler shift in the measurement. That is yet another proof of the theory of relativity, but which one, special or general? Perhaps both?

Unlike the astrophysical proofs of relativity, the CERN results are much closer to home. Right in our backyard, we can test the theory of relativity any time we wish.

This may be ho hum news for many, but it is pretty exciting if you happen to dabble in theoretical physics. Why? Because relativity is the sum of all barriers, light speed is just the biggest.

The speed of sound is a common barrier. Not just the first speed of sound, but the second speed of sound and probably a third, ad infinitum, up to the light speed barrier. That would mean that the theories of general and special relativity may be combined into a Law of Relativity. That would be an enormous simplification of general physics, ground breaking!

So my excitement over the CERN discovery may be a touch more than the average Joe Six Pack’s excitement. The first point is that this discovery partially validates this simple relationship, dF/dT=4*alpha*F/T, where F is energy flux, T is temperature in K, and alpha is the relative coefficient of flux in a medium. The little d’s being the change of F with respect to the change of T.

By expansion, dF/dT=4(aF+bF+cF+…+nF)/T: the summation of energy flux, allowing for relativistic considerations, divided by the initial temperature is equal to the change in F with respect to the change in T, a simplification of the Stefan-Boltzmann equation that applies to all energy flux, not just electromagnetic energy. That is amazing if physics is one of your hobbies. It could redefine how we understand the big universe and the small universes of atoms. Exciting stuff!

Since this is all new, I will be starting a new series of posts on what is a fascinating subject to me. I started a new label, Cosmic Puzzles, a while ago, and this series of posts will be under that label.

Atmospheric Phonons - RHC and the Greenhouse Effect

Modeling heat flux exchange between atmospheric boundary layers

Conductive, convective and radiant heat fluxes interact and change with density in the atmosphere. That complicates making a simple model that best illustrates the heat exchange between layers. Ideally, the basic model could be used for as many layers as possible, so that changes in the impact of one flux relative to the others would be most apparent.

Using the surface and Tropopause as an example: flat plates for the surface opposed by a flat-plate Tropopause would be a simple illustration for radiant flux; for conductive flux, opposing triangles, with a broad base at the surface decreasing to a point below the Tropopause, opposed by a potential energy triangle with its broad base at the potential temperature of the conductive energy transferred to the atmosphere; and convective with latent would be a column with its width equal to the energy transferred from the surface to the point of condensation, which then tapers to a point where water vapor is negligible.

For an RHC model, the three flux models would be combined into what appears to be a cone opposed by a cone; more accurately, a Bucky-mid opposed by a Bucky-mid, a Bucky-mid being a cone with its base shaped like a segment of a Bucky ball. Two dimensionally, a triangle will have to do.

Because of the interaction, the flat plates would not be very descriptive. The three dimensional model would be a Bucky ball core centered in a Bucky ball sphere. The surface base for each flux would be the same, but the area of the Tropopause Bucky segment would vary for each flux.



The drawing attempts to show, in two dimensions, how the sum of the three energy fluxes shifts from mixed flux to nearly pure radiant flux. The area of each flux shows the amount of work performed to create the potential energy of the atmosphere, the atmospheric effect.



Deftly erasing the individual flux representations leaves the opposing triangles, which represent the surface net flux opposed by the atmospheric effect.

Energy is converted from kinetic to potential with the typical loss of efficiency expected when work is performed, Thermo 101.

Visualizing the net effect with efficiency loss is easy. Understanding why radiant energy flux has to obey the basic laws of thermodynamics appears to not be so easy for many of my readers. This explanation starts with, “Nearly perfect does not equal perfection.”

The inverse square law of wave propagation is alive and well in physics. The concave shape of the atmosphere relative to the convex shape of the surface does not mean that infrared radiant heat flux can be focused. Visualizing the radiant energy of the atmosphere as a point source at the average altitude of its origin is a more representative expression of how its impact decreases with the distance between the source of the energy and the target, or sink, for that energy. If we could focus infrared radiation, our energy worries would be over. That difference between short wave and long wave electromagnetic radiation should be a clue to those misinterpreting the atmospheric effect.
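A minimal sketch of that inverse square behavior, treating the emitting layer as an isotropic point source as the paragraph suggests (the source power here is an arbitrary illustrative number, not an atmospheric value):

```python
import math

def point_source_flux(power_w, r_m):
    """Flux (W m^-2) at distance r_m from an isotropic point source."""
    return power_w / (4.0 * math.pi * r_m**2)

P = 1000.0  # illustrative source power, W
f_near = point_source_flux(P, 1.0)
f_far = point_source_flux(P, 2.0)
print(f_near / f_far)  # doubling the distance cuts the flux to a quarter
```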

This partially illustrates why the assumption of perfect energy transfer from the upper troposphere to the surface is incorrect. Unfortunately, that is a common assumption in the Greenhouse Effect Theory.

The much more interesting part is the interaction of the three flux members. At the surface, opacity is very high; there is little if any direct radiant transfer from the surface to the top of the atmosphere. GHG molecules can absorb surface energy, but the time scale for pure emission is much too long, so collisional transfer dominates the cooling of the GHG molecules.

This is well known. What appears to be new is that this transfer also involves work, with enough loss of efficiency that it is not negligible. Simply stated, Kirchhoff's law needs a little tweaking in a gray body application: energy into a layer is equal to the energy out plus entropy, since work is performed at less than 100% efficiency.

Very simple concept: there is no free lunch in energy transfer. The fun part is figuring out the entropy for radiant heat transfer in a mixed gas environment with changing density and composition of the gases.

In modern physics, quantum mechanics would be used to describe the probability density of the photons by their relative motions and energies. A touch complex, but doable. In classical physics, relativity would be used to simplify the complexities addressed by quantum physics. That is where Relativistic Heat Conduction comes into the picture.
From Wikipedia, since that is one of the few sources I have at my disposal, the main features of RHC are:

1. It admits a finite speed of heat propagation, and allows for relativistic effects when heat flux transients approach that speed.
2. It removes the possibility of paradoxical situations that may violate the second law of thermodynamics.
3. It, implicitly, admits the wave–particle duality of the heat-carrying “phonon”.

The phonon distinction (http://en.wikipedia.org/wiki/Phonon) is interesting. While it applies to solids and some liquids, the results of the Kimoto equation suggest that it may also apply to gases. I find that interesting. One of the criticisms of RHC is that "the equivalence of relativity and the second law is shocking, because it implies that one of them can be a derivative of the other." Imagine that?

Note: What I thought was a simple explanation is turning into a book. There has been a great deal of research done on RHC, and I am sure I am wasting time describing what has been much more effectively communicated by others. I am a little curious how well my simple observations jibe with current research that I do not have access to at the moment.

Tuesday, October 18, 2011

What the Heck is Effective Emissivity?

I am still working on the details, but it is the restriction to light flow through a medium, and it is starting to look like any medium. Very interesting.

While this is still theoretical, it appears that the vacuum of space is not resistance free to at least low energy photons. Not much, but a little, which is enough to figure out what it is approximately.

Since even photons have mass, it is not unrealistic to believe that the mass of a photon may increase as its energy decreases. If that is the case, then the radiant part of the Relativistic Heat Conduction (RHC) equation is much easier to determine. That is a very cool thing!

The mass, though, doesn't have to be determined directly. The frequency and wavelength of photons are subject to change when there is interaction with mass. Short wave absorbed becomes long wave radiated. Long wave at one wavelength can become long wave at another wavelength.

For CO2, the absorption and emission band near 14.7 microns is the big picture, but conductive interaction can shift the picture to the smaller side spectra. The mass encountered can add its own spectra to the picture. We end up with a picture out of focus in mixed gas environments, which is probably all environments to a degree.

Space is nearly perfect for radiant energy transport, with the exception of the inverse square law: the cone of energy expands with the square of its distance from the source. Nearly perfect is far from true perfection. While it would be hard to measure, especially if you were not looking for it, interaction with dust and possibly even other low energy photons could create an effective resistance to flow, "Effective" emissivity.

In the Kimoto equation I have used the terms conductivity, convectivity and emissivity as the related impedances to conductive, convective and radiant heat flows. Latent heat is lumped in with convective, as it should be, but there is a sensible component to latent heat which should not be ignored in convective calculations.

With a well described initial condition, conductivity, convectivity and emissivity, in the sense of effective emissivity, which varies with density, can be approximated. Small state changes allow the approximations to be extended, allowing a more detailed description of the change in each value with density, temperature and changes in gas composition. Pretty difficult to solve from the basics, but not that difficult to estimate.

At the surface, my first estimate of emissivity was 0.850, which should have been close. But the best estimate is 0.825. Why?
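To see what those two estimates imply, multiply each by the black body flux of a 288 K surface (my arithmetic, plain Stefan-Boltzmann, nothing more):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURFACE = 288.0  # K

black_body = SIGMA * T_SURFACE**4  # ~390 W m^-2 for a perfect emitter

for eps in (0.850, 0.825):
    print(eps, round(eps * black_body, 1))  # net emission, W m^-2
```

The 0.825 case lands near 322 Wm-2, close to the 321 Wm-2 combined conductive-plus-radiant figure discussed in the DWLR post below.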

Possibly that is the effective emissivity of space to low energy photons. Most measurements of the energy of stars, etc., have small notches where the measured spectrum deviates from the classical calculations. The Rayleigh-Jeans equation works well for low energy photons but suffers from the ultraviolet catastrophe. Stefan-Boltzmann works well for higher temperature objects, but just doesn't cut it for lower temperature objects. The Planck equation falls in between. Things change with energy and mass. Pretty simple concept.
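The crossover between those regimes is easy to demonstrate. A short sketch (standard constants, my own comparison) takes the ratio of the Rayleigh-Jeans approximation to the full Planck radiance at 288 K: at 1 mm the two agree within a few percent, at 15 microns Rayleigh-Jeans is already several times too high, and at 1 micron it diverges absurdly, the ultraviolet catastrophe:

```python
import math

H = 6.62607e-34   # Planck constant, J s
C = 2.9979e8      # speed of light, m s^-1
KB = 1.38065e-23  # Boltzmann constant, J K^-1

def planck(lam, T):
    """Planck spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def rayleigh_jeans(lam, T):
    """Rayleigh-Jeans approximation, valid only at long wavelengths."""
    return 2.0 * C * KB * T / lam**4

T = 288.0
ratios = {}
for lam in (1e-6, 15e-6, 1e-3):  # 1 micron, 15 microns, 1 mm
    ratios[lam] = rayleigh_jeans(lam, T) / planck(lam, T)
    print(lam, ratios[lam])
```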

Does that change mean that the RHC equation is doable? Not really, but it appears to have at least one NEW niche: low energy photons in a mixed gas environment. From that start, who knows what can follow?

I added the bold NEW above because it was one of the more important things missing. RHC has applications, mainly in plasmas. That would make most think that it would not apply to the low temperatures and energies of the atmosphere. The only reason it seems to apply to the atmosphere is the magnitude of the total energy transferred and the large number of thermal gradients. That's my theory and I am sticking to it :)

I would not have noticed a relationship looking at any part of the data, but as a whole it is noticeable, and then as major segments of the atmosphere (northern extent, southern extent and tropics) it is also noticeable, once you are sensitive to what you are looking for thermodynamically.

The differences in the northern and southern responses are most obvious and appear to be explained by the emissive and conductive relationship. The tropopause regulation potential is most noticeable in the tropics and near tropics and appears to be explainable with the conductive/latent to radiative transitions.

Explaining the tropopause regulation may be nearly impossible. The analogy to a radio antenna ground plane is pretty good. Using a ball-over-sphere model showing the inverse square propagation from the upper point source, or ball, onto a much larger spherical surface is helpful as well, but neither really comes close to a proper visual aid. It seems that many may picture a lower point source with a concave outer sphere focusing the back radiation, which is the opposite of the actual effect. I am not positive why it is so difficult to explain with simple geometry why down welling longwave has to obey the inverse square relationship. Some think I am a lunatic just for believing that energy transfer cannot be 100% efficient. That is truly odd!

I would prefer being called a lunatic for more sophisticated reasons, like believing conductive heat flux never should have been considered negligible. I mean, that did surprise me. Had it not at least offered some explanation for the Antarctic's refusal to warm as predicted, I would not have pursued this theory.

My limited acceptance of the absolute value of Stefan-Boltzmann, Rayleigh-Jeans or Kirchhoff's law is reasonable grounds for calling me a lunatic. Still, we are only looking at a possible 1% change in radiative forcing, which is easily offset by the tails in nearly any spectrum of any element in the atmosphere. What appears to be the case in the atmosphere is only a fraction of a percent uncertainty in the classical equations for what is admittedly a special case. I really don't see the issue there, especially with the relativistic motion of the photons with changing density.

Perhaps I am just a lunatic for thinking that what is accepted, but obviously not working, should be questioned. That's no fun. If it is wrong and getting worse, it should be questioned.

Anyway, the poor drawing seems to be a pretty good representation of what is happening in the tropopause. Those interested can check the temperature profiles of the tropopause in the latitude 20 to 40 ranges to see that temperature can decrease by nearly 50C in short time periods. That is a much more rapid response than the stratospheric temperature change. It is all in the rates of the rate of change.

What appears to be happening is much more interesting than what was predicted to happen. Man can alter climate, only not as was once thought.

Sci-Fi and the Tropopause Heat Sink

I have been goofing around attempting to write a science fiction novel for a year or so. In my society of the future, the denizens would have had to deal with today's issues to progress to my vision of the future. I was never satisfied with how the Global Warming thing worked out. So I had to work on a Coming Ice Age scenario or some wonderful technological magic. Fantastical technology is a bit overdone in sci-fi, so I was thinking of a combination of nature and technology stumbling to a compromise.

That's what started me reading up on the Global Warming stuff. You need a few inept scientific characters for a humorous aside in a good novel, where better to look?

The Tropopause heat sink was something that looked totally plausible. The Trop does neat stuff. All the drawings in the encyclopedias have these neat and tidy lines showing a flat temperature profile. That's kinda weird, so I needed a little imagination to figure out how weird to make it.

How's this;


Okay, it's a low budget Sci-fi visual..




The flat sides, where there is no change in temperature in the tropopause, represent a region of constant net energy flux. When there is a change in up welling flux, the temperature decreases to allow more tropopause relief (the light blue triangles). When the flux decreases, the constant flux region lengthens to oppose the reduction. An energy flux variable venturi. That sounds pretty Sci-Fi-ish.

What happens is that the little e below the venturi remains pretty constant. Conductive and latent flux increase, which tends to increase the little e on top of the venturi. Excess energy is forced out the side spectral windows of the venturi, relieving energy and decreasing the temperature, which narrows the width of the venturi.

Then I was going to explain how the increased percentage of conductive flux below the tropopause tended to smear the radiant spectra because of the relative motion of the photons banging around more than normal. You've got to have some reference to relativity, special or otherwise, in a good sci-fi novel even though no one really understands that stuff. The poor scientist that discovered the relationship had to prove how valuable Antarctica was to the climate environment, to save Earth. Nasty corporation types were planning on developing the vast southern continent. Corporate types make great villains.

Of course, this same relativity thing was how I was going to get the space ship's pulse fusion drive to near light speed so it could dark-matter lens to over the apparent speed of light, without the occupants turning into gravity induced spaghetti. I thought that may be cooler than the wormhole thing.

Oh, how did future Earth dwellers survive the Coming Ice Age? That was pretty easy.

Monday, October 17, 2011

Could Atmospheric Conductivity Help Regulate Antarctic Temperature?

The “Effective” emissivity of the atmosphere does not have to be large to be significant. The relationship to what is a small value of the thermal conductivity of the atmosphere is what is important. Note in the table below, borrowed from the thermal conductivity of CO2 page at http://www.engineeringtoolbox.com/carbon-dioxide-d_1000.html, that the thermal conductivity of CO2 increases to its maximum near -20 C, then decreases. Odd, that? Now where in the world would that matter?
While I am sure that the thermal conductivity of the atmosphere is a constant topic of conversation among climate scientists, I have never heard it mentioned except when I asked about its impact.

So when I get around to it, I will attempt to fine tune the Kimoto equation. For now, I am comfortable with my preliminary results.
Temperature  Density  Specific Heat  Thermal       Kinematic    Prandtl
- T -        - ρ -    Capacity       Conductivity  Viscosity    Number
(°C)         (kg/m³)  - cp -         - k -         - ν -        - Pr -
                      (10³ J/kg K)   (W/m K)       (10⁻⁶ m²/s)
-50          1156     1.84           0.086         0.119        2.96
-40          1118     1.88           0.101         0.118        2.46
-30          1077     1.97           0.112         0.117        2.22
-20          1032     2.05           0.115         0.115        2.12
-10           983     2.18           0.110         0.113        2.20
  0           927     2.47           0.105         0.108        2.38
 10           860     3.14           0.097         0.101        2.80
 20           773     5.0            0.087         0.091        4.10
 30           598     36.4           0.070         0.080        28.7
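A tiny script, using the conductivity column transcribed from the table above, picks out that maximum:

```python
# Thermal conductivity k (W/m K) of CO2 versus temperature (C),
# transcribed from the table above.
k_by_temp = {
    -50: 0.086, -40: 0.101, -30: 0.112, -20: 0.115,
    -10: 0.110,   0: 0.105,  10: 0.097,  20: 0.087, 30: 0.070,
}

t_max = max(k_by_temp, key=k_by_temp.get)
print(t_max, k_by_temp[t_max])  # the conductivity peak sits at -20 C
```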

Sunday, October 16, 2011

The Relative Motion of Low Energy Photons in a Mixed Gas Environment

The rate of radiant heat flux in the changing density of the atmosphere changes proportionally with the probability distribution of the random motion of the photons. Simple and obvious.

This explains why radiant flux in a downward direction experiences a change in its impedance to flow relative to the upward direction. Also, at higher density the horizontal motions tend to cancel. At lower density, the horizontal motion is not negligible with respect to the greater impedance down versus the lesser impedance up. The troposphere can behave as an antenna ground plane to radiant energy originating near the top of the troposphere.

This is nothing Earth shattering, but the impact does not appear to be negligible.

The magnitude of this error seems to explain the differences between the Global Climate Model estimates and the simple calculations from the Kimoto equation.

Now the relationship between mid-tropospheric temperature and stratospheric temperature rates of change provides a better estimate of the value of the effective emissivity at the top of the troposphere, explaining the shift circa 1994 in the relationship.

The mid-troposphere/stratosphere temperature relationship should make a good Watt-meter.

Now all I have to do is prove that, not relativity, for the Kimoto equation's use to be accepted.


Note: I am working on other things, so this is just another note for me.

While N2 and O2 have little absorptivity in the IR spectrum, all it takes is a little to be an impedance to radiant flux attempting to travel at the speed of light. That impedance would change with density, which in turn changes with pressure. The sum of the impedances imposed by the individual gases and their concentrations would be the Effective impedance. The difference in the outbound and inbound effective emissivities would be proportional to the change in density, and inversely related.

This tends to imply that simplifying the variables to temperature, potential temperature and pressure/density should result in an accurate estimate of the change in Effective emissivity. No need to complicate the equation with the Rayleigh-Jeans equations.

What is a Pyrometer Measuring When You Aim it at the Sky?

Temperature, based on the infrared spectrum of the device. It is not directly measuring DWLR due to the "Greenhouse" effect; it is measuring temperature, which is energy.

Why would it measure about 320 Wm-2, or roughly 274 K, which is about 1 degree C? Because there is potential energy in the atmosphere. The weight of the atmosphere, held up against the force of gravity by outgoing energy, mainly conductive flux assisted by radiant flux leaving the surface for space, is what creates the potential energy.
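The conversion from flux to temperature is just the Stefan-Boltzmann law run backwards, T = (F/sigma)^(1/4). A quick check (my arithmetic, standard constant) puts 320 Wm-2 near 274 K, about 1 C:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brightness_temperature(flux_wm2):
    """Invert the Stefan-Boltzmann law: T = (F / sigma)^(1/4)."""
    return (flux_wm2 / SIGMA) ** 0.25

T = brightness_temperature(320.0)
print(round(T, 1), round(T - 273.15, 1))  # kelvin, then Celsius
```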

At night, does the tropopause fall hundreds of meters? No, it slowly sinks, so slowly there is little change in altitude. The energy flow through the atmosphere changes by nearly two hundred Wm-2 between day and night, more from season to season. Why doesn't the altitude of the tropopause constantly move up and down with the change? Because the tropopause regulates the flow of energy by changing temperature. The tropopause can change by more than 30C faster than its altitude can change. This is because conductive flux from the surface maintains the lapse rate, along with radiant energy interacting with water vapor.

In the day, solar energy is absorbed both at the surface and in the atmosphere. The average ratio is 70 atmosphere to 170 surface. That ratio, 0.41, times the surface flux is 160 Wm-2, which happens to be approximately the atmospheric effect at the top of the troposphere. That is why the atmospheric effect is roughly in equilibrium. Clouds, greenhouse gases and dust can change that equilibrium ratio. Latent flux change attempts to balance changes in that equilibrium.


Changes in solar cycles change the ratio. High energy short wave, UV, changes more than low energy near infrared. It is a push versus pull effect on the lapse rate: the surface convection pushing, the upper troposphere convection pulling. That amplifies the solar change slightly.

The pyrometer, or infrared thermometer, is measuring the net down welling energy of all this dynamic energy transfer, a large portion of which is the response to the conductive flux, the potential energy of the atmosphere. You could measure at the surface and subtract the temperature at the end of the lapse rate. Why bother? You have the temperature at the surface and the temperature at the top end of the lapse rate; calculate the DWLR. It is about 288K - 246K = 42K on average.

The Earth's atmosphere is in a remarkable balance of competing energy flux effects. It is easy to think you are measuring one, when you are in fact measuring several.

What is the significance of the 42K? It would be the approximate change in surface temperature due to the "Greenhouse" gas portion of the atmospheric effect. Remember, latent flux cools the surface.

If that is the case, 216/42 = 5.14 Wm-2/K is the climate sensitivity at the top of the tropopause and 216/33 = 6.55 Wm-2/K the sensitivity at the surface. There is an inverse relationship between energy at the surface and energy at the top of the tropopause. A doubling of CO2, if it equals 3.7 Wm-2 of forcing, would produce 3.7/6.55 = 0.56 degrees at the surface and 3.7/5.14 = 0.72 degrees at the top of the troposphere. Where the change in forcing is felt is very important to know. If you consider the conductive and latent flux responses, the ratio changes slightly.

Saturday, October 15, 2011

I am Still Getting Flack over the Value of Down Welling Radiation!

It seems that some people believe that there is no Down Welling Longwave Radiation (DWLR), or that it is twice what it should be.

Here's the pooh. Yes, there is DWLR. Always has been. Always will be, as long as we have an atmosphere.


In the drawing by Kiehl and Trenberth, the total flux for a black body at 288K, its calculated energy flux of 390 Wm-2, and the conductive plus latent heat fluxes are all shown. Everything appears to balance. There is a difference in the radiant energy from the surface absorbed by the atmosphere when compared to the NASA drawing; approximately 24 Wm-2 is the difference.

This is or at least should be common knowledge. My question was why?

If you turn off the sun, the conductive and latent fluxes do not stop as if by magic. The total outgoing energy will be equal to conductive (thermals in K&T's case) plus convective (latent in this case, with a sensible component) plus radiant. The total of all three will be 390 Wm-2 initially.

390 - 24 (conductive) - 79 (convective) = radiant from the surface, or 287 Wm-2 radiant. Got that?

Only the 287 is subject to the "Greenhouse" effect at the surface. The "Greenhouse" effect cannot be greater than the energy affected. Yes, the total can be derived from the full 390 from the surface: 390 at the surface minus 240 at the top of the atmosphere is 150 Wm-2. The common value of the "Greenhouse" effect is 155 Wm-2, so it is a little different because the drawings are not exact in every way.

The 155 Wm-2 is at the top of the atmosphere. 287 - 155 = 132 is the value at the surface. Notice the difference? Those two numbers give the ratio 155/132 = 1.17, and 1.17 * 155 = 182. So the simplest estimate of the energy flux that would produce 155 Wm-2 at the TOA is 182 Wm-2. No energy flow gets a free ride; there is always an energy loss in transmission. We live on a sphere and there is entropy.
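The bookkeeping in the last few paragraphs, gathered in one place (these are the cartoon-budget numbers quoted above, not measurements of mine):

```python
surface_bb = 390.0  # black-body flux at 288 K, W m^-2
conductive = 24.0   # "thermals"
latent     = 79.0   # latent/convective flux
ghe_toa    = 155.0  # "Greenhouse" effect at the top of the atmosphere

surface_radiant = surface_bb - conductive - latent  # radiant leaving the surface
ghe_surface     = surface_radiant - ghe_toa         # value at the surface
ratio           = ghe_toa / ghe_surface             # transmission loss ratio
implied_flux    = ratio * ghe_toa                   # flux needed for 155 at TOA

print(surface_radiant, ghe_surface, round(ratio, 2), round(implied_flux))
```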

The tropopause is a neat part of the atmosphere, where the temperature is colder than any other place on or above the surface of Earth other than space. This is where the latent heat flux eventually releases its heat. That is up to 79 Wm-2 released directly to the tropopause. The absolute maximum energy of the "Greenhouse" effect could be 182 + 79 = 261 Wm-2. But we know energy must be conserved; it would never be perfectly transferred to the tropopause. What may it be then? 240 Wm-2. The "Greenhouse" effect cannot manufacture energy, only retain energy, and that at a loss. Entropy, remember?

The "Greenhouse" effect due to a surface temperature averaging 288K cannot be greater than 240 Wm-2 for our planet. If you add the 170 Wm-2 solar absorbed by the surface to the 390 Wm-2, you get 560 Wm-2. Why would I use 560 Wm-2 to determine the "Greenhouse" effect of a planet at 288K emitting 390 Wm-2 on average? I would not.

The "Greenhouse" effect is the radiative portion of the atmospheric effect, which just happens to be ~220 Wm-2 measured at the surface, 155-160 Wm-2 at the top of the atmosphere, and 132-155 measured at the tropopause. Sorry, life on Earth is not linear. The 321 Wm-2 is the combination of conductive and radiant energy. With no "Greenhouse" effect there would still be conduction. That's just the way it is.

So how much conduction? How much latent? How much non Greenhouse gas radiant? That's what I am working on, not some vision of perpetual motion caused by a silly cartoon with an incorrect number.

It appears that the models that generated that incorrect number are also generating an incorrect value of the "Greenhouse" effect. How much? About 10% +/- 8% more. That is all. A meager 10% that may mean a lot in the overall scheme of things.

Update: So why not use the 390 and 24? Or 390 and 79? Short answer: that's not the atmospheric effect. Look at it this way: 390 Wm-2 at the surface and 240 Wm-2 at the TOA is the atmospheric effect. That's the TOA, not the troposphere. That number assumes that the no-GHG Earth was 255K, or 33C cooler than now. That is assuming a lot. What would it be? By my calculations, 390 - 216 = 174 Wm-2, or about 235K at the surface, with 255K at the TOA. The Earth would be 20K cooler because of latent cooling if there were no radiant flux interaction with the atmosphere at all. But the assumed temperature was 255K at the surface, +/- 3 degrees, the possible error assuming 30% albedo, which includes clouds and white ice. Would a frozen Earth with no atmosphere have clouds and snow? I don't think so. Latent energy cools the surface and warms the troposphere; conductive energy warms the surface and the atmosphere; radiant heat cools the surface, warms the lower atmosphere and cools the upper troposphere. The net effect at the surface is more than 155 Wm-2, but it is not 321 Wm-2. Any value of change in flux that gives you the exact 33C includes all heat flux, not just radiant absorption. You have to assume that conduction and latent heat do not exist to get that answer. Do they?
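Checking the 390 - 216 = 174 Wm-2 figure against the Stefan-Boltzmann law (my arithmetic, standard constant) gives a surface near 235 K, about 20 K below the assumed 255 K:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

flux = 390.0 - 216.0        # 174 W m^-2, as in the note above
T = (flux / SIGMA) ** 0.25  # invert the Stefan-Boltzmann law
print(flux, round(T, 1))    # kelvin
```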

The fun part for me is that the silly Kimoto equation that I used, just to see if it might be valid, seems to be. If it is, it indicates some neat stuff: that the Earth environmental data collected for the global warming issue may be accurate enough to provide some insight into relativistic heat flow. One of the accidental things that happens when you spend billions on research is that you learn something new, something unexpected. Could it all be a bunch of crap? You betcha! But so far it just keeps showing promise. Fun stuff!

Friday, October 14, 2011

What is The 4C Thermal Boundary?

Update: The CERN study in Switzerland (http://www.technologyreview.com/blog/arxiv/27260/) realizes that the speed of light is a real barrier. That's a good thing. That would mean the perception of the speed of photons in a medium changes; the relative speed is what is important, not the actual speed. That makes life a lot simpler. It makes the calculation of the variable for radiant flux in a mixed gas environment make sense, without having to redefine solid physics. The change in the rate of change is all that is required, not a change in the speed of light. It kinda blows my dark energy theory back, but improves the probability of a Kimoto equation solution.


Again with the high quality graphics: the temperatures of Earth (yes, I know it is a degree or so off here or there).


Note: This is a work sheet I am leaving public. I know most of this has been done before, I am just using a different frame of reference to attempt to better define the variables.

RHC, Relativistic Heat Conduction, is not required to solve any particular thermodynamic problem, but it does simplify the solution of complex problems covering millions of years of heat transfer.


d(dF/dt)/d(dT/dt), or the change of the change in flux per the change of the change in temperature, both with respect to time. It is like all energy flow wants to accelerate, but may be limited by its medium of transport. Light appears to have a mass because it cannot accelerate beyond the speed of light, because space is not a perfect medium.

Note: Light having a mass is a bone of contention with some. The way I look at it, as the energy of a photon increases, its mass is converted into energy; when the energy decreases, the mass increases as energy is converted to mass. E=MC^2 and all that. So my hypothesis is that a photon is an assembly of subatomic particles, each with a specific quantum of energy and the equivalent of shells for orbits. The combination of possible orbital occupations would be the quantum energy of the photon. Interaction with electrons in matter produces the phonon effect. The phonon is the missing element in the RHC equation for the atmosphere. It really should be simple.

Would this play hell with Coulomb's Law? I don't think so. It would tend to more firmly relate fields. Not a bad thing. It should not be too hard to figure out what the basic quantum is.
http://youtu.be/tEL3Amxf8eI

So a photon may have 1x10^35 quantum states, from relativistic masses of 2.21x10^-42 kg to 2.21x10^-7 kg. That's just a rough estimate, of course. The mass of an electron is 9.10938x10^-31 kg. Since the relative mass of a photon is obviously not going to be equal to that of an electron, angular momentum, gravity and charge would have to be allowed for to determine the effective rest mass of the photon, which is likely on the order of 2.21x10^-42 kg. The overlap of potential relative masses would indicate possible interaction in a mixed gas environment, but there is work still to be done.


Unfortunately, it appears that the apparent mass of a photon has to approach zero, from its already near infinitesimally small mass, for this to work. That would mean the speed of light is a relative constant; it would approach an infinity. A little scary when you look at it as a whole, but it makes sense. Whether this totally agrees with the concept of VSL (variable speed of light), I don't know yet.


Most understand the simple heat transfer barriers: insulation, gas to liquid contact, optics and radiant energy. RHC just defines all heat transfer in terms of time scales (changes in rates of change may be better). It is a simplification, probably not an ultimate solution, but a step in that direction.

I will be working on this from time to time to define simple RHC boundaries. One of the more interesting is the deep ocean 4C barrier. This is a density barrier: above 4C, sea water density varies with energy flow. Below the 4C barrier, temperature is relatively constant, as heat flow is slow, on the order of tens of millennia and microwatts per meter squared. The effect is the appearance of near perfect conduction of heat, thermal equilibrium on a much longer time scale. That is a much tighter, denser probability cloud, i.e., if it is easier to locate a packet of energy, its rate of change is less.





Selecting a frame of reference is more than just a choice of a point in space; it is a choice of space and time. The 4C boundary in the deep ocean is the point of maximum density of our saline ocean. From this boundary upward is the ocean-atmosphere mixing layer, where heat transfer is at a much greater speed than below the 4C layer. Below the 4C layer is the ocean-crust mixing layer. Its heat transfer time scale is on the order of tens of millennia.

This is analogous to the tropopause, where the rate of heat transfer can be much greater than the rate of transfer of energy from the surface mixing layer to the tropopause.

Most studies of thermodynamics cover these issues with coefficients of heat transfer across a thermal barrier. Part of the description of the coefficient of heat transfer is the time constraints, but in normal applications the time constraints can be simplified. In studying the Earth system, these time constraints are only negligible across one boundary per estimate. The relative impacts of the time constraints between two or more thermal boundaries have to be considered for correct estimates.

4C boundary: time constant of tens of millennia

Upper ocean: time constant of roughly a millennium


This layer extends from the 4C density boundary to the surface. There are several sublayers within it: the 100 meter layer, defined by the shorter shortwave radiant energy, green to ultraviolet; the 10 meter layer, defined by the longer shortwave energy, yellow moving to near infrared; and the skin layers, millimeters to micrometers.

Surface-air mixing layer: time constant of roughly months


This is the most interesting boundary to me. Radiant heat from the surface of the oceans is limited by the coefficient of heat transfer from water to air. Changes in wind change the rate of flow. Changes in density change the flow. Changes in the composition of the gases change the flow. Once radiant energy is transferred, the photons enter a supercharged version of nature's pinball machine from Hell. Greenhouse gases can readily absorb photons, but the rate of collisional heat transfer relative to emission by relaxation is phenomenal. Conductive heat transfer is more coherent in the direction of the temperature drop, while emission is totally random. Absorption followed by collisional de-excitation can enhance conductivity like crazy. The wavelength of the photon from one form of de-excitation can change in nanoseconds. Each of these relaxations that change wavelength can be less efficient than the last or more efficient than the next. This is thermal chaos! Going from near zero to light speed and back billions of times in fractions of a second.

This is the main reason for my considering relativistic heat conduction and the possibility of variable light speed. While the speed of light may not vary, it would definitely appear to vary. It is like the Doppler effect on steroids. Yeah, I think it is kind of exciting :)

The probability density approach is the only way to come close to solving this layer. Once that is estimated, the probability density changes with density of the media, ratio of the mixed gases and the rate of temperature decrease with density. This is where the Kimoto equation becomes a major tool to simplify the calculations. Using surface temperature, potential temperature and "effective" emissivity, a reasonable approximation may be possible. Approximation though, is all it will ever be. There is no equilibrium, just probability density.


Tropopause: time constant of roughly microseconds


In the tropopause, you have upwelling infrared, downwelling infrared, incoming solar at wide angles, and scattered/reflected solar from below. A lot of different fluxes from all directions. The spectral window is very clear for some wavelengths and opaque for others. This is the one spot in the atmosphere where a standard radiative model could get totally lost. More ice, water vapor or water would play hell with getting a good number. All it takes is a few molecules combined to change the radiant spectrum of the small amounts of water. Measurement of the changes would be complicated because the available angles of emission and absorption are not in line with the instrumentation. Getting it close is quite a feat. This is where RHC can really come in handy. The complicated relationships of heat flux can be simplified to temperature, pressure and potential temperature, which is a function of temperature and pressure, to get an estimate of the "effective" emissivity*. That may sound complicated, but the change in temperature with altitude is a direct indication of the net flux. Change in the rate of change of temperature is an indication of the magnitude and sign of the net flux. So the temperature relationship between the mid-troposphere and the lower stratosphere can give you an indication of the flux relationships in the tropopause.

Now that the weird energy changes and actual changes in the speed of light are ruled out, the relative motion of the photons is the impedance to radiant flux, which changes with density. Near the tropopause the side windows are more open, which explains the dramatic changes possible in the tropopause. So it will be much easier to explain why the change in rate of change has such an impact. Resistance to flow through the stratosphere changes slowly with respect to the side windows, allowing radiant flux relief, if you will, for larger changes in flux from the surface.

What does this mean for the mid-tropo/strat Watt-meter? That larger areas of the stratosphere will be needed to get the full signal of the change in flux from the surface.


* At the surface: emissivity of the surface, times transmittance of the atmosphere, times the change in transmittance with respect to density. Roughly, at this point in the calculations.

The confusing part is the "effective" emissivity. In a straight line, electromagnetic radiation would follow all the classic rules. The mixture of wavelengths, energies and angles appears to be simplified as a single "effective" unit with this special case of RHC. How accurately, I am still working on that. It looks pretty close and would be closer with two accurate estimates at two different densities. The more points you get right, the more you can get right.

In any case, the change in the rate of change is very important.

TOA boundary with space: time constant of roughly nanoseconds

As at the tropopause, but much simpler. With less banging around, the photons are more coherent. There is still some "resistance" to flow, so there will be a change in the rate of change of interaction on entering the relatively constant emissivity of space. Space still has a "resistance", so there is still entropy. So there is roughly an order of magnitude change in the rate of collisions between the stratosphere and space.

This is the concept of Relativistic Heat Conduction as I see it: the change in the rate of change for all forms of energy is related to entropy, so all forms of energy flux share common transmission properties.

Thursday, October 13, 2011

The Mysterious Case of the Missing Heat

In the discussion, or diatribe, on the Kiehl and Trenberth missing heat, it appears a good portion of the heat was found. Not all of it though; that's bugging me.

Nature is pretty simple in a complex way. Yes, that sounds like a contradiction, but it is our understanding that is insufficient: we overthink the simple and underthink the complex.

Which is where I am now. The inverse square law is pretty common in nature. That's why the triangles on the drawing are shaped the way they are, and why it is a part of so many equations. The Stefan-Boltzmann equation is an example of the inverse square law: F = sigma*T^4 is F = sigma*(T^2)^2. The full derivative of F = sigma*T^4 would be dF/dT = 4*A*sigma*T^3 + B*T^2 + C*T + D. The full equation for determining specific enthalpy is the exact same form with different coefficients. They are related, or relative, in nature. That is Relativistic Heat Conduction: simple, but complex.
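Since the post leans on F = sigma*T^4 and its dominant derivative term, here is a minimal Python sketch of just that first-order piece. The higher-order coefficients A, B, C and D mentioned above are left out, since no values for them are given here.

```python
# Stefan-Boltzmann flux and its dominant first-order sensitivity term.
# This sketches only the leading term of the "full derivative" above.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def sb_flux(T):
    """Blackbody flux F = sigma * T^4."""
    return SIGMA * T ** 4


def sb_sensitivity(T):
    """Dominant derivative term dF/dT = 4 * sigma * T^3."""
    return 4 * SIGMA * T ** 3


print(sb_flux(288.0))         # ~390 W m^-2 at the nominal 288 K surface
print(sb_sensitivity(288.0))  # ~5.4 W m^-2 per K of surface warming
```

At 288 K the flux matches the canonical 390 Wm-2 surface value used throughout these posts, which is a handy sanity check on the constant.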

In normal day-to-day calculations, the dominant order of the equation can be used, with the remainder assigned a constant and coefficients to adjust the numbers.

In a lot of ways the full equation describes four orders: zeroth, first, second and third.

Nature can throw a curve ball where the second order terms are in sync, or 180 degrees out, and make the simplified assumption invalid. That appears to be the case with lots of missing things now that we have better ways of measuring our world and our universe. What was once adequate may no longer be. Finding classical solutions and comparing them to observed data is science; assuming anything is ever 100% correct is not.

That is where I am. The coefficients I resolved for one condition are not the same in all others. Time to find the second order effects, not throw away the first order results, refine them.

How you look at the problem makes all the difference. Would 2(F/?)^2 = (2*sigma^(1/2)*(T^2))^2? Unlikely, but always something that should be kept in mind. Things can be over simplified. For that form to work, F would need a new coefficient.

These are idle ramblings of course, the full equation is the proper place to start.

Wednesday, October 12, 2011

Orphan Photons? Are They Dark Energy?

Pondering is much more fun than calculating. The acceleration of our expanding universe is one of those things that only comes around once in a few lifetimes. The perfect thing for a guy with too much time on his hands to ponder.

When a photon with sufficient energy scores a bulls eye on an unsuspecting molecule, something happens. In a CO2 laser, energized nitrogen molecules bash into unsuspecting CO2 molecules to hand off energy and amplify the light. Light Amplification by Stimulated Emission of Radiation: LASER. Lasers are cool!

But in the LASER, that energetic photon can find a new ride. Out in space, hitch hikers need a real big thumb. If the energized photon is released by the bulls eye and can't find a dance partner, it may just sulk around a while and just give up. That is until another photon scores a bulls eye on the wall flower decayed photon. Talk about a small chance in hell, but what the hey, the universe has plenty of time.

If the little piece of dark matter keeps getting in the way of those photons, before you know it, it can be somebody. Maybe even an electron, which is still a long way from a proton, but getting closer.

Electrons are clingy little rascals. While they may prefer a fat proton for a dance partner, they know the other side of the aisle. Yep, that's right, they are into the dark side.

As electrons, the perverts can get together a lot easier. If they happen to hit just the right mass with just the right dance partner, you could have a hydrogen molecule. If they pass up on becoming a fat broad, they can hit another perfect atomic mass. Seems like every thing in nature is about hitting the lotto, the right number at the right time.

Once you gain so much mass, it is hard getting a date, just ask any fat guy. Then it gets too hard to give up the dark side and back into the light. Big enough groups of fat guys can generate a little gravitational anomaly. Gravitational anomalies are real clingy. If you ain't careful, they can black hole ya! You don't want to get black holed, it ain't pretty.

So since a black hole is supposed to have infinite density, how can there be more than one black hole? I guess everything is relative, even infinity. Bet that plays hell with a constant straight guy like the speed of light. Kinda makes the multiverse a little more acceptable as a theory.

A Little Help Please. Global Average Surface Pressure Change

One of the most overlooked variables that relates to climate change is the surface conductivity of the atmosphere. It is overlooked because, for all intents and purposes, it appears to be negligible. I think it probably is, but the equation seems to think otherwise.

CO2 and CH4 improve the conductivity of air. A small improvement, but we are only looking at small changes: average air temperature changes conductivity, and average surface pressure changes conductivity. How much combined change is required to be significant?

In a warming world, the increased temperature decreases conductivity increasing warming. The increased warming increases latent convection increasing cooling. A reasonable counter balance of effects that regulate temperature. With CO2 and CH4 improving conductivity, surface warming would be less amplified by increased surface temperature, dampening one part of the temperature regulator. That should lead to a more stable temperature range, however, natural cooling cycles, solar plus the internal natural variability, could tend to increase the rate of cooling as the surface cools. Not a very good change in the feedback controls.

So CO2 could lead to a warmer stable climate or a wicked shift to a much colder climate, possibly a new glacial period. The Glacial period appears unlikely as does the stable climate, that leaves more wicked climate variability.

A reconstruction of the average sea level pressure of the past few decades may provide some insight into the future. I cannot locate such a product on the internet. Anyone know if such a product exists and possibly where?

Determining How Wrong I May Be

While I am fine tuning my spreadsheet to better estimate the values of the coefficients, I have been getting correspondence from someone trying to help me disprove myself. In case you want to join the fray, here is my latest response:



True, for the Earth and atmosphere as it now exists:

Surface: 390 Wm-2 @ 288K. TOA: 238 Wm-2 @ 254.5K. Near the tropopause: 225K @ 145.329 Wm-2. That decrease in temperature, and the flux associated with that temperature, is in effect the Atmospheric Effect. If you view the change in temperature with the change in altitude, that is in effect the change in net flux in the atmosphere.

For a no-atmosphere Earth with albedo equal to zero, Ein = Eout, and 340 Wm-2 indicates a temperature of 278.3K.

Earth however does have a wealth of nitrogen and oxygen. While they have few significantly intense spectral lines in the SW and LW spectrum, they do have a coefficient of heat conduction. With a no-greenhouse-gas atmosphere, the 278.3K surface warms the gases near the surface, causing those gases to expand against gravity. The energy required to expand those gases would be the no-GHG atmospheric effect, which would create a low but existing tropopause.

The combination of surface and atmospheric albedos would supposedly create a planet with 240 Wm-2 in and 240 Wm-2 out, the basic model of the no-greenhouse-gases Earth used to calculate the magnitude of the greenhouse effect. For the top of the tropopause, that would be a valid model. However, since the Earth would have a conduction-induced tropopause with latent heat transferred from the surface to the top of the tropopause, the surface temperature would not be 254.5K @ 238 Wm-2; those are the conditions at the tropopause, or TOA, for a no-GHG Earth.

With cloud albedo estimated at 10% and surface albedo at 20%, 90% of the incoming solar 340 Wm-2 would penetrate the cloud cover, 306 Wm-2, and 80% of that, 0.8 times 306 Wm-2, would be absorbed by the surface: roughly 244.5 Wm-2, which corresponds to a surface temperature of 256.25K. A small but not insignificant difference from 254.5K: 1.75/33 = 5.3% of the warming.

If cloud albedo is 15%, which I believe quite reasonable, then with 15% reflected by clouds, 340 Wm-2 * 0.85 = 289 Wm-2 reaches the surface, of which 85% would be absorbed with a surface albedo of 15%, giving 245.65 Wm-2 absorbed at the surface, which would have an equivalent temperature of 256.5K. Small but still not insignificant relative to 254.5K. The location of the albedo factors matters, as it is 6% of the total calculated warming.

What my use of the equation is doing is showing an 8% overestimation of warming due to the variability of the assumption of initial albedo. Which, BTW, happens to be approximately the margin by which climate models are currently overestimating warming.

I would like to fine tune the equation to see what assumption of initial albedo would be correct. If the equation is correct, there are indications of interesting feedback relationships, which are currently being published by NASA. http://pubs.giss.nasa.gov/abs/la09300d.html

The data I have gleaned from the use of the equation so far indicate a tropopause and lower stratosphere ice particle feedback from deep convection that has to date been underestimated. Dr. Susan Solomon has a relatively new paper where the recently discovered impact of stratospheric water vapor has a cooling effect. I believe that using the spectrum of ice, instead of water vapor, would fine tune that estimate, as it only takes a few molecules of water vapor joined together to radiate in the ice spectrum. Again, a small but not insignificant impact.

If you now consider that a 5% error in temperature results in a 20% error in flux value, you will see why I am a little interested in this pseudoscience. :)

It may be nothing of course; however, the results are interesting thus far.

Thanks for your patience, Lynx-Fox
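The albedo arithmetic in the response above can be checked in a few lines. This sketch just reproduces the two cases with the round numbers from the letter; the point is that splitting a total albedo of 0.30 into cloud and surface parts (0.9 * 0.8 = 0.72 transmitted) absorbs slightly more than a single 0.70 factor would.

```python
# Checking the albedo-placement arithmetic from the letter.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def sb_temperature(F):
    """Invert F = sigma * T^4 for the equivalent blackbody temperature."""
    return (F / SIGMA) ** 0.25


solar = 340.0  # diurnal-average incoming solar, W m^-2

# Case 1: 10% cloud albedo, 20% surface albedo
absorbed1 = solar * 0.90 * 0.80   # ~245 W m^-2 absorbed at the surface
# Case 2: 15% cloud albedo, 15% surface albedo
absorbed2 = solar * 0.85 * 0.85   # ~245.65 W m^-2 absorbed at the surface

print(absorbed1, sb_temperature(absorbed1))  # ~256 K, vs. 254.5 K
print(absorbed2, sb_temperature(absorbed2))  # ~256.5 K, vs. 254.5 K
```

Both cases land around 256K rather than the 254.5K of the single-albedo model, which is the "small but not insignificant" difference the letter is arguing about.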


Yes, the estimates are not far apart, but when evaluating a 1% change, a 5% potential error is significant.

Relativistic Conduction of Heat

If the Earth had no albedo and a constant diurnal-average incoming solar input of 340 Wm-2, its temperature would be approximately 278.3K, using the S-B equation and assuming it is a perfect black body.

Adding a no-GHG atmosphere with a combined surface and atmospheric albedo of 0.30, 30% of the incoming solar, 102 Wm-2, would be reflected to space with no impact on the effective temperature of the surface. 238 Wm-2 would be absorbed by the surface and transmitted to space unaffected by the atmosphere. This is an unrealistic assumption: conductive and latent heat would still transfer heat through the atmosphere before being radiated to space at the top of the atmosphere. There is no perfect means of transferring energy without a loss to entropy.

The surface temperature would be warmer and the energy converted in transfer through the atmosphere creates potential energy by expanding the atmosphere against gravity.

Latent energy is transferred at a higher efficiency than conductive energy. Latent energy efficiency is related to the pressure decrease with altitude created by the conductive flux efficiency in transferring energy through the mixed-gas atmosphere, the dry adiabatic lapse rate. The combined effect is that the surface of the Earth is warmer than the 254.5 degrees indicated by the S-B temperature at 238 Wm-2 and less than the 278.3 degrees indicated by the S-B temperature at 340 Wm-2, assuming a perfect black body at both conditions.
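The two S-B bounds in the paragraph above are quick to verify; a minimal sketch, assuming a perfect blackbody at both conditions, as the text does:

```python
# Bounding the surface temperature with the Stefan-Boltzmann law.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def sb_temperature(F):
    """Equivalent blackbody temperature for a flux F."""
    return (F / SIGMA) ** 0.25


lower = sb_temperature(238.0)  # ~254.5 K, the post-albedo absorbed flux
upper = sb_temperature(340.0)  # ~278.3 K, the no-albedo average solar input
print(lower, upper)  # the real surface should sit between these bounds
```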

While greenhouse gases amplify the radiative impacts on the atmosphere, the atmosphere still has an emissivity that changes with the density and optical properties of the molecules in the atmosphere. Emissivity in the atmosphere decreases as pressure decreases. In space, emissivity has its minimum value where opacity is also at its minimum; space is a very clear optical window, but not perfectly clear. Dark energy in space would not be easily visible, due to the combination of very low emissivity and relatively high opacity at its point source.

All the heat fluxes have efficiencies based on dG/dD, dT/dP and dD/dP, where G is gravity, T is temperature in K, D is density and P is pressure in millibar.

Using the Kimoto simplification, dF/dT approximately equal to 4(aFc+bFl+cFr)/T,

Where F is flux in Wm-2, a is a function of dG/dD, b is a function of dT/dP and c is a function of e*dD/dP, where e is a combination of the true emissivity of the surface of the Earth and the initial value of the emissivity of the atmosphere at the surface of the Earth in an upward direction.

Solving for the initial values of the variables a, b, and c at surface temperature T = 288K, standard average air pressure and gravity: a = 0.33, b = 1.09 and c = 0.825. We should be able to determine a reasonable solution for all three Earth conditions in three-dimensional space. If all three agree, then time is not a part of the solution. If they do not agree, a fourth-dimensional solution would be warranted.
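To show how the Kimoto simplification evaluates with those coefficients, here is a sketch. Important caveat: the conductive, latent and radiative flux values below are NOT given in this post; they are assumptions in the style of a Trenberth-type surface budget, purely for illustration.

```python
# Evaluating the Kimoto simplification dF/dT ~ 4*(a*Fc + b*Fl + c*Fr)/T
# with the coefficients solved above. The flux partition (Fc, Fl, Fr) is
# assumed, not taken from the post.
a, b, c = 0.33, 1.09, 0.825   # coefficients at T = 288 K (from the text)
T = 288.0                     # surface temperature, K

Fc = 24.0   # assumed conductive (sensible) flux, W m^-2
Fl = 78.0   # assumed latent flux, W m^-2
Fr = 66.0   # assumed net radiative flux, W m^-2

dFdT = 4 * (a * Fc + b * Fl + c * Fr) / T
print(dFdT)  # ~2 W m^-2 per K of surface warming, with these assumptions
```

With this assumed partition the sensitivity comes out near 2 Wm-2 per K; different flux splits would shift it, which is exactly why the coefficients matter.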

This is where I am at currently. Since I don't do LaTeX very well, it is the best description I can give online at the moment.

Dallas

ps https://docs.google.com/spreadsheet/ccc?key=0AqLGErXDPyPFdEotM0RZd1Qzc2N5allBa2s1cWotcGc&rm=full#gid=0

is a link to a Google Docs spreadsheet. The text is not overlaying empty cells, which is a pain. Click on each cell to view comments and formulas.

Anyone interested can leave their email and I will include access.

Tuesday, October 11, 2011

What is the Gain of the CO2 Control Knob?

When I was playing with the Kimoto equation, the perturbation produced a maximum greenhouse response of -271 Wm-2 at the surface. When Steven Mosher asked on Dr. Curry's blog what the gain of the CO2 control knob would be, that got me thinking.

A value that I came up with, an approximate effective emissivity of 0.71, was interesting. I was hoping to get some up/down emissivity data online, but haven't had much luck, so I just quit. The looking-down emissivity at the TOA is about 0.61, so I thought of doing some estimating.

0.71*390 = 276 Wm-2. This is a higher value than my estimated atmospheric effect of 220. So that value makes sense, because that would be all radiative at some altitude.

0.61*390 = 238 Wm-2, which should be the forcing at a lower altitude. Don't get crazy, these are just guesstimates for fun.

If the 276 Wm-2 were at the same altitude as with water vapor, then 0.7*276 = 193, which would indicate warming if the same downward emissivity is assumed.

0.7*238 = 167 at the surface, which would also be some warming, as the value would be 155 Wm-2 in a line drawing for no temperature change. The difference, 193 - 167 = 26, is for some odd reason the value that Trenberth has for atmospheric absorption of OLR. Which would be reasonable, as his OLR from the surface to space is the reason for his miscalculation. The free-to-space radiation interacts with the atmosphere as shown in the NASA drawings. This is what I consider part of the key issue with the choice of frame of reference: it is easy to miss something swapping your reference around. So if his drawing showed the 321 Wm-2 in the tropopause, corrected for the missing ~24 Wm-2, his cartoon would have been dead on. 321 + 24 = 345 Wm-2 at the tropopause, times the 0.71 emissivity at the tropopause, would equal 244, which is close to the maximum value and within the rather large margin of error of my calculations. Or the 321 Wm-2 at the tropopause times the effective emissivity would be 0.71*321 = 228, which is well within my margin of error.
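To make the chain of guesstimates above easier to follow, here it is as a sketch; every emissivity value is the rough estimate from the post, not a measured quantity.

```python
# The chain of emissivity "guesstimates", written out in order.
F_surface = 390.0      # surface flux, W m^-2
eff_up = 0.71          # estimated effective emissivity
eff_toa_down = 0.61    # looking-down emissivity at the TOA
eff_down = 0.70        # assumed downward emissivity

F_alt = eff_up * F_surface        # ~277 W m^-2, all-radiative at altitude
F_low = eff_toa_down * F_surface  # ~238 W m^-2, forcing at a lower altitude

# Difference between the two downward-scaled values:
absorbed = eff_down * F_alt - eff_down * F_low
print(F_alt, F_low, absorbed)  # absorbed ~27, near Trenberth's 26 W m^-2
```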

Since in my opinion energy must be conserved, the triangles are showing the impact of the upwelling opposed by the downwelling. He perhaps was inverting the triangles to show 321 Wm-2 at the tropopause opposed by 228 at the surface. An alternate description that would be correct, but not illustrative of conservation of energy. In either case, 321 at the surface is incorrect, but there are options with the selection of frame of reference. And trust me, I am looking to find my error, if there is a large one.

Note: While correct, the 216 Wm-2 is a number with a value, while 321 Wm-2 requires manipulation to sense its value. More accurate communication is important for the guys getting paid to do science. I just fish, so who cares?

As I said, don't get crazy, this was just a guesstimate, but I am interested in exactly how he made his small error that gets magnified.

If he would address the issue, it would be interesting. Oh, and no water vapor interaction with CO2 would produce more warming in the upper troposphere, which would produce more surface warming as the optical window to the surface would be clearer. How much, I don't know, but it is the water vapor barrier to downwelling longwave that is an important consideration.

Just so people don't have to run around: the initial estimate of GHG forcing, which is really 155 Wm-2, would be felt as ~345 Wm-2 at the surface if you assume there was no tropopause. Regardless of the height of a no-GHG tropopause, it would exist, so the initial conditions assume a tropopause if the 30% albedo was included for initial temperature and OLR. The reason my numbers balance better is that I adjust for the change in the height of the tropopause to maintain a consistent surface frame of reference.

Science as a Contact Sport

What happened to the good old days of science? Well they are back! Full contact in your face science!

Consensus science is powder puff football. Dumbing down so no scientist is left behind. Real science is Australian rules football! Get your nose bloody, take your licks and try to give back better.

Today's scientists mistake the subtlety of the centuries-old confrontations. When Angstrom told Arrhenius he was wrong with "Sorry, old boy. You appear to have miscalculated," in today's terms that was equivalent to "You twit! What planet are you from? Heh, Auburn grad!"

You think a ponderer of his destiny as a member of the master race would take that lying down? No! If he had the goods on Angstrom he would have been right dead in his face with, "Perhaps you should review your experiment again." Instead, Arrhenius sulked for a decade before grudgingly conceding, "Yes, warming would not be as much." The media was all over Svante: "Arrhenius admits error, but does not provide his results!" It was years later before he cried uncle with, "1.6 (2.3) with water vapor."

That's the problem with these master race wimps. Arrhenius should have grown a pair and lashed back at Knut: "I may be off, but so are you!" A classic scientific feint to restore some honor. Then science would have advanced.

That is how it is supposed to work, in your face science! Like Dessler and Spencer. Get dirty and kick some data!

A Point I Missed Explaining Very Well- The Tropopause

Assuming that the TOA is at the surface for a no-greenhouse Earth is incorrect. Due to the conductive flux, the tropopause would be located approximately 3800 meters above the surface. Close, but not at the surface. Since the Earth still has water, it still would have latent cooling. This effect transfers heat from the surface to the infant tropopause. That is the initial condition of the atmosphere as calculated assuming a 255K temperature and 240 Wm-2 TOA total energy flux.

Adding the radiative effect of greenhouse gases elevates the Tropopause with the inclusion of the moist adiabatic lapse rate.

If your choice of frame of reference is correct it will work in all other frames. Poor choice of frame of reference results in pondering perpetual motion, dark energies, pseudo scientific phenomena, which is entertaining, but not very scientific.

Why The Estimates were Off and Why I am Moving On.

I know my logic and attempts at math are hard to follow, so I will give a quick explanation of why estimates by Trenberth are right but wrong.

The initial estimate for the greenhouse effect includes albedo due to clouds, that defines the frame of reference at the Top of the Atmosphere, but which top?

The tropopause gives one answer, the actual TOA where emissivity = 0.61 another; only the surface is consistent, so it has to be used as the frame of reference to determine "surface" warming. Thermodynamics 101: KISS, FRAME OF REFERENCE, ASSUME.

The "Effective" emissivity is a non-linear function of the atmosphere and the interaction with other flux. Solving for that is fun, explaining the obvious is not.

So I am moving on.

Monday, October 10, 2011

Using the Greenhouse Effect Triangles

Oops Drawing thanks to NASA web site.

Steven Mosher is a smart guy. He calculates that a doubling of CO2 would equal 1.5 degrees of warming. Most people estimate 1.2 degrees for a doubling. My version of the Kimoto equation estimates 0.8 degrees of warming.

Using the triangles, the surface warming estimated by the equation is 0.8 which would cause 1.5 degrees warming at the top of the red triangle. For the 1.2 estimate, it depends where that warming actually takes place. If the 1.2 is warming at the top of the triangle it produces 0.86 warming at the surface. If that 1.2 is warming at the surface, it produces 1.7 degrees warming at the top of the triangle. What happens depends on where it happens.

For simplicity, just use 1.4 and 0.7, the ratio of the fluxes and the direction of the points: surface times 1.4 ~ top, top times 0.7 ~ surface. Most of the difference in estimates is the choice of the frame of reference.
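The 1.4/0.7 rule of thumb above can be wrapped up as a pair of one-liners. Note that the exact reciprocal of 1.4 is ~0.714, so the 0.7 shortcut gives slightly different numbers than dividing by 1.4 (0.84 vs. the 0.86 quoted in the text).

```python
# Converting warming between the surface and the top of the triangle,
# using the post's rule of thumb: surface * 1.4 ~ top, top * 0.7 ~ surface.
def surface_to_top(dT_surface, ratio=1.4):
    """Warming at the top of the triangle implied by surface warming."""
    return dT_surface * ratio


def top_to_surface(dT_top, ratio=0.7):
    """Surface warming implied by warming at the top of the triangle."""
    return dT_top * ratio


print(surface_to_top(1.2))  # 1.2 K at the surface -> ~1.7 K at the top
print(top_to_surface(1.2))  # 1.2 K at the top -> ~0.84 K at the surface
```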

Dark Energy and our Not Accelerating Expanding Universe

Figuring out why is the fun in life. Things should make sense, except for women, of course, but the universe expanding just doesn't make much sense.

What appears to be happening is that the speed of light is decreasing as dark matter in the universe increases. So what does that mean?

In the Stefan-Boltzmann equation there are two constants: 5.67e-8 and ~0.926, emissivity. But is that emissivity for a black body, for space, or both? I am thinking both. Since the emissivity of water is ~0.995 and the constant in the S-B equation is 0.926, my first guess would be that the emissivity of the most perfect black body that could exist in nature is approximately that of water. That means that the emissivity of space could be 0.069. Why would that not be zero? Dark matter.

Hmm? If the emissivity of space is not perfectly zero, what happens to light passing through space? It interacts with space, creating dark energy. Everyone knows that there are tiny traces of hydrogen floating around in space. When light interacts with hydrogen, maybe one photon in billions and billions impacts the hydrogen molecule perfectly dead center; the photon, which also has a tiny mass, creates a different form of energy, and that transition is subject to entropy. No energy flow ever gets a free ride, because of entropy, not even light.

How would increasing dark energy collapsing into dark matter change our perception of light? My guess is that the speed of light would appear to change. Perhaps the speed of light does change. I'll look into that in the future.

But since no energy ever gets a free ride, the equation makes perfectly good sense to me. What is the temperature of space again? Perhaps it may be worthy of a little closer inspection?

Update: One commenter thinks I am whacked. That is true. But if the expansion of the universe appears to be accelerating, that is kinda thought provoking. Since there are few real constants, why should light, or our perception of light, be constant?
