Titanic

Chaos theory, as presented in Gleick's book, has been puzzling me. Why doesn't the book take into account that, for example, a long night evens out the emergence of chaotic forces? And is that actually so?

Can it be shown that nature needs night, and the temperature equalization it brings, in order to prevent chaos? Or that the human body's vital functions need night so that chaotic processes in the body are reduced to a minimum?

The special position of the planet Venus came to mind. On Venus the day lasts a long time and night is rare. Venus is a symbol of chaos in the sense that even its direction of rotation differs from the other planets of the solar system. According to the Bible there will be 2000 years of a time when there are large cloud masses in the sky. Is this, too, one way of preventing chaotic forces from taking complete control of the Earth? In other words, is nature in a constant battle against chaotic forces? And does that battle demand measures from nature that are unreasonably large?

If the possibility of chaos really does hinge on something as small as Gleick shows in his book, then some system is needed to prevent chaos. If that system is night, then in my view it is a truly large and massive arrangement for a truly small thing.

And what about the distant planets, where night and cold are in a sense the prevailing conditions: do those distant planets need them because of possible chaos factors? Or is the dominance of night and ice due purely to distance from the Sun?

Would chaos be total on, say, the planet Neptune without night and cold? What about galaxies? Are there strong chaos factors at the edges of galaxies?

Why does nature use so much "energy", or more accurately absence of energy, to ward off chaos?

Do you have any thoughts on this? Admittedly this is more philosophy than physics.
The further out the decimal place in which a small change can occur, the more forcefully nature reacts to maintain order?


Comments (18)

Titanic

Suppose Neptune's orbital velocity were 5.4778 km/s and Mercury's orbital velocity were 47.8765 km/s, and suppose the fourth decimal of Neptune's orbital velocity changed to a nine and the second decimal of Mercury's orbital velocity changed to an eight. Which change would have the greater effect in terms of chaos and order?

Can a question like that be considered scientifically or philosophically?
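One concrete way to compare the two changes is to look at their absolute and relative sizes. A minimal Python sketch, using the hypothetical velocities from the post above:

# Hypothetical orbital velocities from the post (km/s)
neptune_old, neptune_new = 5.4778, 5.4779    # 4th decimal changes to a nine
mercury_old, mercury_new = 47.8765, 47.8865  # 2nd decimal changes to an eight

for name, old, new in [("Neptune", neptune_old, neptune_new),
                       ("Mercury", mercury_old, mercury_new)]:
    change = new - old
    print(f"{name}: +{change:.4f} km/s, relative change {change / old:.2e}")

# Prints approximately:
# Neptune: +0.0001 km/s, relative change 1.83e-05
# Mercury: +0.0100 km/s, relative change 2.09e-04

By itself this only says that Mercury's perturbation is roughly ten times larger in relative terms; whether either one matters dynamically depends on how quickly the system amplifies small differences, which this arithmetic does not address.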

Keijona

To put an end to absurdity, the first thing to do is to give up and stop producing absurdity. It's the fools of Hölmölä who carry light into their cabins in sacks.

The rich have enough; the poor want more.

Guest

Pearls before swine, but so be it.

3.1 Level 1: Complete Certainty

This is the realm of classical physics, an idealized deterministic world governed by Newton's laws of motion. All past and future states of the system are determined exactly if initial conditions are fixed and known—nothing is uncertain. Of course, even within physics, this perfectly predictable clockwork universe of Newton, Lagrange, Laplace, and Hamilton was recognized to have limited validity as quantum mechanics emerged in the early twentieth century. Even within classical physics, the realization that small perturbations in initial conditions can lead to large changes in the subsequent evolution of a dynamical system underscores how idealized and limited this level of description can be in the elusive search for truth. However, it must be acknowledged that much of the observable physical universe does, in fact, lie in this realm of certainty. Newton's three laws explain a breathtakingly broad span of phenomena—from an apple falling from a tree to the orbits of planets and stars—and have done so in the same manner for more than 10 billion years. In this respect, physics has enjoyed a significant head start when compared to all the other sciences.
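The point about small perturbations is the standard sensitivity-to-initial-conditions effect from chaos theory. As a toy illustration (my own, not taken from the quoted paper), here is the logistic map in Python, with two starting values that differ only in the tenth decimal:

# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4)
r = 4.0
x, y = 0.2, 0.2 + 1e-10   # two initial conditions differing by 1e-10

for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 20 == 0:
        print(f"step {n}: separation = {abs(x - y):.3e}")

# Typical output: the separation grows from about 1e-10 to order 1
# within roughly 40 steps, after which the two trajectories are unrelated.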

Guest

3.2 Level 2: Risk without Uncertainty

This level of randomness is Knight's (1921) definition of risk: randomness governed by a known probability distribution for a completely known set of outcomes. At this level, probability theory is a useful analytical framework for risk analysis. Indeed, the modern axiomatic foundations of probability theory—due to Kolmogorov, Wiener, and others—is given precisely in these terms, with a specified sample space and a specified probability measure. No statistical inference is needed, because we know the relevant probability distributions exactly, and while we do not know the outcome of any given wager, we know all the rules and the odds, and no other information relevant to the outcome is hidden. This is life in a hypothetical honest casino, where the rules are transparent and always followed. This situation bears little resemblance to financial markets.
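A small illustration of this level (my own example, not the paper's): with the distribution fully known, quantities such as the expected value of a bet can be computed exactly, with no data and no statistical inference.

from fractions import Fraction

# European roulette, straight-up bet: 1 winning pocket out of 37, paying 35 to 1
p_win = Fraction(1, 37)
expected_value = p_win * 35 + (1 - p_win) * (-1)
print(expected_value, float(expected_value))   # -1/37, about -0.027 per unit staked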

3.3 Level 3: Fully Reducible Uncertainty

This is risk with a degree of uncertainty, an uncertainty due to unknown probabilities for a fully enumerated set of outcomes that we presume are still completely known. At this level, classical (frequentist) statistical inference must be added to probability theory as an appropriate tool for analysis. By “fully reducible uncertainty”, we are referring to situations in which randomness can be rendered arbitrarily close to Level-2 uncertainty with sufficiently large amounts of data using the tools of statistical analysis. Fully reducible uncertainty is very much like an honest casino, but one in which the odds are not posted and must therefore be inferred from experience. In broader terms, fully reducible uncertainty describes a world in which a single model generates all outcomes, and this model is parameterized by a finite number of unknown parameters that do not change over time and which can be estimated with an arbitrary degree of precision given enough data. The resemblance to the “scientific method”—at least as it is taught in science classes today—is apparent at this level of uncertainty. One poses a question, develops a hypothesis, formulates a quantitative representation of the hypothesis (i.e., a model), gathers data, analyzes that data to estimate model parameters and errors, and draws a conclusion. Human interactions are often a good deal messier and more nonlinear, and we must entertain a different level of uncertainty before we encompass the domain of economics and finance.
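A toy illustration of fully reducible uncertainty (again my own sketch, not from the paper): the bias of a coin is fixed but unknown, and the estimation error shrinks as data accumulates, so the situation approaches Level 2 with enough observations.

import random

random.seed(0)
true_p = 0.55   # fixed but unknown bias we try to recover from data

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < true_p for _ in range(n))
    estimate = heads / n
    print(f"n = {n:>9}: estimate = {estimate:.4f}, error = {abs(estimate - true_p):.4f}")

# The error shrinks roughly like 1/sqrt(n).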

Guest

3.4 Level 4: Partially Reducible Uncertainty

Continuing our descent into the depths of the unknown, we reach a level of uncertainty that now begins to separate the physical and social sciences, both in philosophy and model-building objectives. By Level-4 or “partially reducible” uncertainty, we are referring to situations in which there is a limit to what we can deduce about the underlying phenomena generating the data. Examples include data-generating processes that exhibit: (1) stochastic or time-varying parameters that vary too frequently to be estimated accurately; (2) nonlinearities too complex to be captured by existing models, techniques, and datasets; (3) nonstationarities and non-ergodicities that render useless the Law of Large Numbers, Central Limit Theorem, and other methods of statistical inference and approximation; and (4) the dependence on relevant but unknown and unknowable conditioning information. Although the laws of probability still operate at this level, there is a non-trivial degree of uncertainty regarding the underlying structures generating the data that cannot be reduced to Level-2 uncertainty, even with an infinite amount of data. Under partially reducible uncertainty, we are in a casino that may or may not be honest, and the rules tend to change from time to time without notice. In this situation, classical statistics may not be as useful as a Bayesian perspective, in which probabilities are no longer tied to relative frequencies of repeated trials, but now represent degrees of belief.

Guest

Using Bayesian methods, we have a framework and lexicon with which partial knowledge, prior information, and learning can be represented more formally. Level-4 uncertainty involves “model uncertainty”, not only in the sense that multiple models may be consistent with observation, but also in the deeper sense that more than one model may very well be generating the data. One example is a regime-switching model in which the data are generated by one of two possible probability distributions, and the mechanism that determines which of the two is operative at a given point in time is also stochastic, e.g., a two-state Markov process as in Hamilton (1989, 1990). Of course, in principle, it is always possible to reduce model uncertainty to uncertainty surrounding the parameters of a single all-encompassing “meta-model”, as in the case of a regime-switching process. Whether or not such a reductionist program is useful depends entirely on the complexity of the meta-model and nature of the application.

At this level of uncertainty, modeling philosophies and objectives in economics and finance begin to deviate significantly from those of the physical sciences. Physicists believe in the existence of fundamental laws, either implicitly or explicitly, and this belief is often accompanied by a reductionist philosophy that seeks the fewest and simplest building blocks from which a single theory can be built. Even in physics, this is an over-simplification, as one era's “fundamental laws” eventually reach the boundaries of their domains of validity, only to be supplanted and encompassed by the next era's “fundamental laws”. The classic example is, of course, Newtonian mechanics becoming a special case of special relativity and quantum mechanics. It is difficult to argue that economists should have the same faith in a fundamental and reductionist program for a description of financial markets (although such faith does persist in some, a manifestation of physics envy).

Markets are tools developed by humans for accomplishing certain tasks—not immutable laws of Nature—and are therefore subject to all the vicissitudes and frailties of human behavior. While behavioral regularities do exist, and can be captured to some degree by quantitative methods, they do not exhibit the same level of certainty and predictability as physical laws. Accordingly, model-building in the social sciences should be much less informed by mathematical aesthetics, and much more by pragmatism in the face of partially reducible uncertainty. We must resign ourselves to models with stochastic parameters or multiple regimes that may not embody universal truth, but are merely useful, i.e., they summarize some coarse-grained features of highly complex datasets. While physicists make such compromises routinely, they rarely need to venture down to Level 4, given the predictive power of the vast majority of their models. In this respect, economics may have more in common with biology than physics. As the great mathematician and physicist John von Neumann observed, “If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is”.
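A minimal sketch of the kind of two-state regime-switching process mentioned above (the parameter values are arbitrary, chosen only for illustration, not taken from Hamilton's papers):

import random

random.seed(1)

# Two regimes with different means and volatilities; a two-state Markov
# chain decides which regime is active at each step.
params = {0: (0.05, 0.10),    # (mean, std) in the calm regime
          1: (-0.10, 0.40)}   # (mean, std) in the turbulent regime
p_stay = {0: 0.95, 1: 0.80}   # probability of staying in the current regime

state, data = 0, []
for _ in range(1000):
    mu, sigma = params[state]
    data.append(random.gauss(mu, sigma))
    if random.random() > p_stay[state]:
        state = 1 - state

# A model that assumes one fixed distribution cannot fully capture this,
# no matter how much data it sees: the uncertainty is only partially reducible.
print(sum(data) / len(data))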

Guest

3.5 Level 5: Irreducible Uncertainty

Irreducible uncertainty is the polite term for a state of total ignorance; ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter. Such uncertainty is beyond the reach of probabilistic reasoning, statistical inference, and any meaningful quantification. This type of uncertainty is the domain of philosophers and religious leaders, who focus on not only the unknown, but the unknowable. Stated in such stark terms, irreducible uncertainty seems more likely to be the exception rather than the rule. After all, what kinds of phenomena are completely impervious to quantitative analysis, other than the deepest theological conundrums? The usefulness of this concept is precisely in its extremity. By defining a category of uncertainty that cannot be reduced to any quantifiable risk—essentially an admission of intellectual defeat—we force ourselves to stretch our imaginations to their absolute limits before relegating any phenomenon to this level.

3.6 Level ∞: Zen Uncertainty

Attempts to understand uncertainty are mere illusions; there is only suffering.

Guest

3.7 The Uncertainty Continuum

As our sequential exposition of the five levels of uncertainty suggests, whether or not it is possible to model economic interactions quantitatively is not a black-and-white issue, but rather a continuum that depends on the nature of the interactions. In fact, a given phenomenon may contain several levels of uncertainty at once, with some components being completely certain and others irreducibly uncertain. Moreover, each component's categorization can vary over time as technology advances or as our understanding of the phenomenon deepens. For example, 3,000 years ago solar eclipses were mysterious omens that would have been considered Level-5 uncertainty, but today such events are well understood and can be predicted with complete certainty (Level 1). Therefore, a successful application of quantitative methods to modeling any phenomenon requires a clear understanding of the level of uncertainty involved. In fact, we propose that the failure of quantitative models in economics and finance is almost always attributable to a mismatch between the level of uncertainty and the methods used to model it. In Sections 4–6, we provide concrete illustrations of this hypothesis.

WARNING: Physics Envy May Be Hazardous To Your Wealth! (Andrew W. Lo and Mark T. Mueller)

http://mitsloan.mit.edu/media/Lo_PhysicsEnvy.pdf

Guest

How does Intuition work?

Intuition operates at a "level below logic". It is not unscientific or illogical, it is sub-scientific and sub-logical. Intuition operates on events, not theories.

Some of our senses observe events in the world around us and others observe our own bodies to track things like positions of our limbs and our center of balance. Friedrich Hayek has observed that all sensory information is converted to one single kind of nerve signals before reaching the brain. The brain then processes these incoming nerve signals by sending further nerve signals to other parts of the brain.

We can view all nerve signals as events, no matter what their origin or purpose. Memory allows us to track and remember these signaling events.

We can now start remembering which events precede which other events. Sometimes the former are frequent predictors for the latter, and remembering this correlation would be valuable. Intuition is a process that uses this kind of correlation data to make short-term predictions that are correct often enough to improve our survival.
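A crude sketch of that idea (my own illustration, not the site's implementation): keep counts of which event follows which, and predict the most frequent follower, with no model of causality at all.

from collections import defaultdict, Counter

follows = defaultdict(Counter)   # follows[a][b] = how often b came right after a

def observe(events):
    for prev, nxt in zip(events, events[1:]):
        follows[prev][nxt] += 1

def predict(event):
    # Most frequent follower seen so far, or None if the event is new
    return follows[event].most_common(1)[0][0] if follows[event] else None

observe(["clouds", "rain", "clouds", "rain", "clouds", "sun"])
print(predict("clouds"))   # 'rain' (seen twice) rather than 'sun' (seen once)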

Evolution has, over millennia, discovered many elegant shortcuts to this primitive brute-force version. So have I, in six years of exploration. Some of these shortcuts are (or will be) described elsewhere, and others will (currently) be discussed only with collaborators.

Note that Intuition makes no attempt to model causality, or create any kind of high level models or theories. That would be using Logic. Intuition simply tracks events. This means it is immune to all the listed problem types in Bizarre Domains that confuse Logic based systems.

http://artificial-intuition.com/intuition.html

jussipussi

Titanic wrote:

Why does nature use so much "energy", or more accurately absence of energy, to ward off chaos?

Do you have any thoughts on this? Admittedly this is more philosophy than physics.
The further out the decimal place in which a small change can occur, the more forcefully nature reacts to maintain order?

This one is worth listening to and watching. It is physics too, possibly.

"What is life-lecture: Jeremy England"

https://www.youtube.com/watch?v=e91D5UAz-f4 .

Keijona

Keijona wrote:
To put an end to absurdity, the first thing to do is to give up and stop producing absurdity. It's the fools of Hölmölä who carry light into their cabins in sacks.

"In a way you already answered the question in your own text. Order is a local concept."

The wheel is holy. Drawing circles in the mind works; underlinings do not.

The rich have enough; the poor want more.

Titanic

Entropy as such presumably cannot be prevented in any way?

When the side of a planet that is ocean freezes, the temperature stays the same the whole time, say at minus one degree, and yet the freezing process releases heat energy equal to the mass m of that ocean multiplied by 333 kJ/kg.

You would think that the freezing of the ocean only increases order, but I suppose there, too, it turns out that entropy increases?
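A rough back-of-the-envelope check of that guess, as a simplified sketch (treating the ocean as freezing at about -1 °C and assuming the released heat flows into colder surroundings at an arbitrarily chosen 250 K):

# Entropy bookkeeping for freezing 1 kg of ocean water (rough sketch)
L_fusion = 333_000.0   # latent heat of fusion, J/kg (the 333 kJ/kg above)
T_water = 272.15       # about -1 degrees C, in kelvin
T_cold = 250.0         # assumed colder surroundings receiving the heat, in kelvin

dS_water = -L_fusion / T_water        # the freezing water becomes more ordered
dS_surroundings = L_fusion / T_cold   # the released heat disorders the surroundings

print(f"water:        {dS_water:8.1f} J/K per kg")
print(f"surroundings: {dS_surroundings:8.1f} J/K per kg")
print(f"total:        {dS_water + dS_surroundings:8.1f} J/K per kg")  # positive

So yes: the freezing itself lowers the water's entropy, but as long as the released heat ends up somewhere colder, the total entropy of water plus surroundings still increases.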

 https://fi.wikipedia.org/wiki/J%C3%A4hmettyminen

https://fi.wikipedia.org/wiki/Entropia

https://fi.wikipedia.org/wiki/Suprajohde

When Indian philosophy speaks of MATERIAL ENERGY, it apparently means the gauge bosons: the photon, the gluon and the W/Z. But in addition to these there exists a kind of beyond, which I myself understand as ENERGETIC MATTER.

Scientists, for their part, speak of the Higgs boson and the Higgs field.

But could that energetic matter simply be the same thing as magnetism or a magnetic field? For when the temperature rises, presumably at some point the magnetic field starts to expand so that it does not demagnetize, that is, disappear. Is there some connection between rising temperature and the extent of a magnetic field?

In this thread I would gladly welcome comments on entropy, even though, because of determinism, probably nothing manifest can be presented. Some fortunate scientist IN THE FUTURE will, after all, have the honor of explaining entropy completely. For that reason we here in the past cannot get any real insight into the matter.
