Fundamental physical limits to information
The laws of physics result in fundamental limits to
information processing. In particular, energy consumption
cannot be made arbitrarily small--the second law of
thermodynamics sets limits on efficiency. An irreversible
one-bit operation has to dissipate at least kT ln(2) joules
(this is often called "Landauer's principle"), and
conversely, one bit of information about the state of a
system can be leveraged to extract useful energy (work) in
the amount of at most kT ln(2) joules from a heat bath at
temperature T (k is the Boltzmann constant).
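For a sense of scale, the Landauer bound is minuscule at room temperature. A minimal numeric sketch (T = 300 K is an assumed example value, not from the text):

```python
import math

# Landauer bound: minimum dissipation for an irreversible one-bit operation
# at temperature T. Illustrative numbers only.
k = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)
T = 300.0         # assumed room temperature in kelvin

landauer_bound = k * T * math.log(2)  # joules per erased bit
print(f"{landauer_bound:.3e} J")      # on the order of 3e-21 J
```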
These ultimate limits are achievable under conditions that
are unrealistic for real world information processing
systems: (i) operations have to be performed arbitrarily
slowly, (ii) all relevant information has to be observable,
(iii) all actions have to be chosen optimally. But
interactive observers (often called "agents") in the real
world usually do not have the luxury of operating
arbitrarily slowly. On the contrary, speed is crucial, and
these systems are therefore mostly not in thermodynamic
equilibrium. They also typically find themselves in
partially observable situations, and usually face
constraints on what they can do, having limited control.
What are general bounds on thermodynamic efficiency that
apply to real world systems? When those bounds are optimized
over all possible strategies with which an agent can
represent data and act on it, do general rules for optimal
information processing emerge? Can this optimization
furthermore give us concrete design principles for learning
machines?
Implications from this line of reasoning are broad. On the
one hand, it could lead to a unifying theory of learning and
adaptation that is well grounded in physical reality; on the
other hand, it could lead to design principles for novel,
highly energy-efficient computing hardware.
Thermodynamics of prediction
We study the energetics of learning machines using
far-from-equilibrium thermodynamics, an area that has gained
increasing traction since Jarzynski's work relation was
published in 1997. We addressed the thermodynamics of
prediction for systems driven arbitrarily far from
thermodynamic equilibrium by a stochastic environment. These
systems, by means of executing their dynamics, implicitly
produce models of the signal that drives them. We showed that
there is an intimate relation between thermodynamic
inefficiency, measured by dissipation, and model inefficiency,
measured by the difference between memory and predictive
power. As a corollary, Landauer's principle, generalized to
systems that operate arbitrarily far away from thermodynamic
equilibrium, is refined: processes running at finite rates
encounter an extra cost that is proportional to the
non-predictive information they retain. The dynamics of far
from equilibrium systems with finite memory thus have to be
predictive in order to achieve optimal efficiency.
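A toy illustration of this refinement, under assumed values (a symmetric binary Markov signal with persistence p = 0.9, and a memory that stores the current signal exactly; this is a sketch of the bookkeeping, not the paper's general derivation):

```python
import math

def H2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Toy environment: a two-state Markov signal x_t that repeats its previous
# value with probability p. A memory s_t that stores x_t exactly has
#   memory           I(s_t; x_t)     = 1 bit
#   predictive power I(s_t; x_{t+1}) = 1 - H2(p) bits
# The non-predictive information (memory minus predictive power) sets the
# extra dissipation cost, in units of kT ln 2.
p = 0.9                                # assumed persistence
memory = 1.0
predictive_power = 1.0 - H2(p)
nonpredictive = memory - predictive_power  # equals H2(p) here
print(f"non-predictive information: {nonpredictive:.3f} bits")
```

The more persistent the signal, the smaller the non-predictive remainder, and the smaller the unavoidable extra dissipation for this memory.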
Thermodynamics of inference and optimal data
More recent work has focused on generalizations of the above
treatment. Common wisdom
places information theory outside of physics, highlighting
the broad applicability of a discipline rooted in
probability theory. The connection to statistical mechanics
is, however, tight, as was emphasized, for example, by E. T.
Jaynes. I have shown how both Shannon’s
rate-distortion theory and Shannon’s channel capacity can be
directly motivated from thermodynamic arguments: the maximum
work potential a data representation can have is directly
proportional to Shannon’s channel capacity. The least
effort required to make a data representation (consisting of
measurement and memory) is governed by the information
captured about the data by the representation. Inference
methods that extract relevant information are thus not outside of
physics, but have, instead, a very tangible physical
justification: minimizing the least physical effort
necessary for representing given data, subject to a fixed
fidelity (or utility), produces an encoding that is optimal
in the sense of Shannon’s rate-distortion theory. It is not
inconceivable that von Neumann, Wiener and Shannon had these
ideas in the back of their minds when they developed
measures of information. However, the analysis we use here
hinges upon the notion of nonequilibrium (or generalized) free
energy, which emerged much later, and which is now becoming
a common tool in the study of systems operating far from
thermodynamic equilibrium (such as living systems).
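The rate-fidelity trade-off invoked here can be computed with the standard Blahut-Arimoto iteration for Shannon's rate-distortion function. A minimal sketch for an assumed binary source with Hamming distortion; the trade-off parameter beta plays an inverse-temperature-like role (all numbers are illustrative, not from the cited work):

```python
import numpy as np

# Blahut-Arimoto iteration: find the encoding p(x_hat|x) that minimizes
# I(X; X_hat) + beta * <distortion> for a toy binary source.
px = np.array([0.5, 0.5])            # source distribution
d = np.array([[0., 1.], [1., 0.]])   # Hamming distortion matrix
beta = 3.0                           # assumed trade-off parameter

q = np.full((2, 2), 0.5)             # p(x_hat|x), uniform initialization
for _ in range(200):
    r = px @ q                       # marginal p(x_hat)
    q = r * np.exp(-beta * d)        # Boltzmann-like reweighting
    q /= q.sum(axis=1, keepdims=True)

r = px @ q
rate = sum(px[i] * q[i, j] * np.log2(q[i, j] / r[j])
           for i in range(2) for j in range(2))          # I(X; X_hat) in bits
distortion = sum(px[i] * q[i, j] * d[i, j]
                 for i in range(2) for j in range(2))    # expected distortion
print(f"rate = {rate:.3f} bits, distortion = {distortion:.3f}")
```

For this symmetric source the fixed point recovers the textbook curve R(D) = 1 - H2(D), with D = 1/(1 + e^beta).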
In the most general setup, information-to-work
conversion happens within partially observable systems. I
showed that the thermodynamic efficiency of generalized,
partially observable, information engines is limited by how
information is retained by the data representation.
Optimizing for energy efficiency thus leads to a general rule
for data acquisition and information processing: retain only
information that is predictive of the quantities to be
inferred. In other words: predictive inference can be derived
from a physical
principle. The generalized lower bound on dissipation can be
directly minimized over all possible data representation
strategies to yield strategies that least preclude efficiency.
Mathematically, this procedure results in the derivation of a
concrete, known and widely used, method for lossy compression
and machine learning, called Information Bottleneck (Tishby,
Pereira and Bialek, 1999).
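A minimal sketch of the resulting Information Bottleneck iterations, on an assumed toy joint distribution (two clusters and a hand-picked beta; the deterministic symmetry-breaking initialization is purely illustrative):

```python
import numpy as np

def ib_iterate(pxy, beta=5.0, n_iter=200):
    """Self-consistent Information Bottleneck updates for two clusters T,
    compressing X while retaining information about the relevant variable Y."""
    nx, ny = pxy.shape
    px = pxy.sum(axis=1)
    py_x = pxy / px[:, None]                      # p(y|x)
    # uniform assignments with a small deterministic tilt to break symmetry
    qt_x = np.full((nx, 2), 0.5)
    qt_x[:, 0] += 0.01 * np.linspace(-1, 1, nx)
    qt_x[:, 1] = 1.0 - qt_x[:, 0]
    for _ in range(n_iter):
        qt = px @ qt_x                            # p(t)
        py_t = (qt_x * px[:, None]).T @ py_x / qt[:, None]   # p(y|t)
        # KL[p(y|x) || p(y|t)] for every (x, t) pair
        kl = np.einsum('xy,xty->xt', py_x,
                       np.log(py_x[:, None, :] / py_t[None, :, :]))
        qt_x = qt[None, :] * np.exp(-beta * kl)   # Boltzmann-like reassignment
        qt_x /= qt_x.sum(axis=1, keepdims=True)
    return qt_x

# Toy joint: four inputs X; the first two predict Y = 0, the last two Y = 1.
pxy = np.array([[.20, .05], [.20, .05], [.05, .20], [.05, .20]])
qt_x = ib_iterate(pxy)
print(np.round(qt_x, 3))  # inputs 0,1 share one cluster; inputs 2,3 the other
```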
Strongly coupled systems - marginalized and conditioned
Machines that can use feedback to interact with the process
they are learning about form a strongly coupled system with
the environment (and/or with each other if the driving
process is another agent). Much recent work has been devoted
to understanding the thermodynamics of strongly coupled,
interacting systems. The second law of thermodynamics was
originally stated by Clausius as “The entropy
of the universe tends to a maximum.” In practice,
measuring the entropy of the entire universe is difficult.
Alternatively, the second law can be applied to any system
isolated from outside interactions (a universe unto itself).
Of course, perfectly isolating any system or collection of
systems from outside influences is also difficult. Over the
last 150 years, thermodynamics has progressed by adopting
various idealizations which allow us to isolate and
measure that part of the total universal entropy change that
is relevant to the behavior of the system at hand. These
idealizations include heat reservoirs, work sources,
measurement devices, and information engines. We showed that
we do not need, in principle, to resort to the usual
idealizations: Conditional and marginalized versions of
the Second Law hold locally, even when the system of
interest is strongly coupled to other driven,
non-equilibrium systems.
The theory we developed here lays the foundation for our
ongoing investigations into the thermodynamics of interactive
learning (more about interactive learning below).
Realistic information engines
Recently, much activity has focused on designing thought
experiments, centered around machines that convert information
(which is presented as physical realizations of bits on a
tape) into work by extracting energy from a heat bath. Some of
these imagined machines seem to be capable of extracting a
significant amount of work by utilizing temporal correlations.
It is important to ask if these models are realistic, in the
sense that they could, in principle, be built. It turns out
that this is, unfortunately, not always the case. We point out
that time-continuous dynamics are necessary to ensure that the
device is physically realizable. Enforcing this constrains
possible designs, and drastically diminishes efficiency.
We show that these problems can be circumvented by means of
applying an external, time-varying protocol. This turns the
device from a "passive" free-running machine into an
"actively" driven one that requires input work to function.
It is perhaps not surprising that actively driven machines are
ubiquitous in biology.
N particle q-partition Szilard engine
Szilard's famous 1929 Gedankenexperiment serves as a
foundation for studying information-to-work-conversion. We
calculated the maximal average work that can be extracted when
not one, but rather N particles are available, and the
container is divided into q partitions. For a work
extraction protocol that equalizes the pressure, we find that
the average extracted work is proportional to the mutual
information between one-particle position and the counts of
how many particles are in each partition.
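This proportionality can be checked by brute force for small N and q. A sketch, with information in bits (so the classic one-particle, two-partition Szilard engine gives 1 bit, i.e. at most kT ln 2 of work; the enumeration approach is my own illustration, not the paper's method):

```python
import math
from itertools import product

def szilard_mutual_information(N, q):
    """I(X; n): mutual information between one particle's partition X and the
    occupation counts n = (n_1, ..., n_q), for N particles placed uniformly
    and independently into q partitions. Returned in bits."""
    info = 0.0
    for config in product(range(q), repeat=N):   # each particle's partition
        p = q ** (-N)                            # probability of this config
        counts = [config.count(j) for j in range(q)]
        # p(X=j | n) = n_j / N, so H(X|n) is the entropy of counts/N
        h_cond = -sum((c / N) * math.log2(c / N) for c in counts if c)
        info += p * (math.log2(q) - h_cond)
    return info

print(f"N=1, q=2: {szilard_mutual_information(1, 2):.3f} bits")  # classic Szilard: 1 bit
print(f"N=2, q=2: {szilard_mutual_information(2, 2):.3f} bits")
```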
Real world information engines
To test the predictions of [1, 2, 6], and others made for
information engines that run at finite speed, we proposed a
series of experiments that should allow us to either develop
concrete building blocks for a "thermodynamic computer", or
to clarify why and how reversible computation is infeasible.
This is a joint project with John Bechhoefer (experiment)
and David Sivak at SFU, funded by the Foundational Questions
Institute. We are
currently interviewing candidates for a postdoctoral
position. Interested candidates should send me an email.
-  S. Still (2019) Thermodynamic cost and benefit of data
representations. Phys. Rev. Lett. (in press) arXiv:1705.00612
-  E. Stopnitzky, S. Still, T. E. Ouldridge and L. Altenberg
(2019) Physical limitations of work extraction from temporal
correlations. Phys. Rev. E 99, 042115.
-  J. Song, S. Still, R. Díaz Hernández Rojas, I. Pérez
Castillo and M. Marsili (2019) Optimal work extraction and
mutual information in a generalized Szilárd engine. Phys.
Rev. E (under review) arXiv:1910.04191
-  G. E. Crooks and S. Still (2019) Marginal and Conditional
Second Laws of Thermodynamics. EPL (Europhysics Letters)
125, 4, 40005.
-  S. Still (2014) Lossy is lazy. Proc. Seventh Workshop on
Information Theoretic Methods in Science and Engineering
(WITMSE-2014), eds. J. Rissanen, P. Myllymäki, T. Roos, and
N. P. Santhanam.
-  S. Still, D. A. Sivak, A. J. Bell, and G. E. Crooks (2012)
Thermodynamics of prediction. Phys. Rev. Lett. 109, 120604.
Funding:
- "...of information processing in living systems"; Foundational Questions Institute.
- "...of Agency" (partly); Foundational Questions Institute with the Fetzer Franklin Fund.
- "...demon in the real world"; with John Bechhoefer (PI) and David Sivak; Foundational Questions Institute.
Invited Conference Talks and Summer Schools
- 11/14-18/2019 Montreal
Artificial Intelligence and Neuroscience (MAIN),
- 07/20-25/2019 The
Foundational Questions Institute 6th International
Conference, Tuscany, Italy.
- 07/11-12/2019 The
Physics of Evolution, Francis Crick Institute,
- 08/26-31/2018 Runde
Workshop, Runde Island, Norway.
- 02/08/2018 Non-equilibrium
dynamics and information processing in biology,
Okinawa Institute of Science and Technology, Japan (remote).
- 11/18/2016 Statistical
Physics, Information Processing and Biology,
Santa Fe Institute, Santa Fe, NM
- 09/25/2016 Information,
Control, and Learning--The Ingredients of Intelligent
Behavior, Hebrew University, Jerusalem, Israel
- 08/20/2016 Foundational
Questions Institute, 5th International Conference,
- 04/25/2016 Spring
in the Physics of Complex Systems International
Center for Theoretical Physics (ICTP), Trieste, Italy.
- 7/14-17/2015 Conference
Sensing, Information and Decision at the Cellular Level
- 5/4-6/2015 Workshop
as Computation". Beyond Center for
Fundamental Concepts in Science.
- 4/8-10/2015 Workshop
on Entropy and Information in Biological Systems
National Institute for Mathematical and Biological
- 10/26-31/2014 Biological
and Bio-Inspired Information Theory Banff,
- 7/5-8/2014 Seventh
Workshop on Information Theoretic Methods in Science
- 5/8-10/2014 Statistical Mechanics Foundations of
Complexity–Where do we stand? Santa Fe Institute.
The Foundational Questions Institute Fourth
International Conference, Vieques Island, PR.
- 6/26-28/2013 Modeling Neural Activity (MONA) Kauai,
- 01/2011 Workshop on
measures of complexity, Santa Fe Institute, Santa Fe, NM.
- 01/2011 - Berkeley
Mini Stat. Mech. Meeting.
Invited Seminars and Colloquia
- 08/2018 - Institute
for Theoretical Physics (ITP), ETH Zuerich,
- 08/2018 - Institute
for Neuroinformatics, University of Zuerich,
- 07/2018 - IST,
- 06/2018 - Google
Deepmind, Montreal, Canada.
- 06/2018 - Facebook AI,
- 11/2016 - Condensed
Matter Seminar, UC Santa Cruz.
- 08/2016 - Biophysics
Seminar, Simon Fraser University, Vancouver,
- 06/2013 - Max Planck
Institute for Dynamics and Self-organization,
- 04/2013 - Scuola
Internazionale Superiore di Studi Avanzati (SISSA)
- 03/2013 - Physics
Department, The University of Auckland, Auckland,
- 03/2013 - Physics
Department, The University of the South Pacific,
- 11/2012 - Center for
Mind, Brain and Computation Stanford University.
- 09/2012 - Physics
Colloquium University of Hawaii at Manoa.
- 10/2011 - Redwood
Center for Neuroscience, University of California
- 08/2011 - Institute
for Neuroinformatics, ETH/UNI Zürich,
- 11/2011 - Symposium in
honor of W. Bialek’s 50th Birthday, Princeton
University, Princeton, NJ.
Optimal data representation and the
Information Bottleneck framework
As we have seen above, the Information Bottleneck method
arises naturally from an energy efficiency argument. It is also
conceptually satisfying, as it makes minimal assumptions (in
particular no assumptions about the statistics of the
underlying process are necessary), yet versatile (such
knowledge can be built in, and the method can also be
combined with other estimation methods), and widely
applicable.
Information Bottleneck framework in practice
We studied how this method can be used when learning from
finite data, and derived a criterion akin to the AIC,
used to prevent over-fitting. We showed that this procedure
produces the desired results on synthetic data [9, 11], as
well as used it in practice to rule out overly complicated
models in practical applications [15, 17]. I generalized
the Information Bottleneck method to recursive, interactive
learning. This generalized Information Bottleneck framework
provides not only a way to better understand known models of
dynamical systems [9, 10], but also a way to learn them from
data. We showed mathematically that the core concepts of
Jim Crutchfield's ``computational mechanics" can be derived
as limiting cases of the generalized Information
Bottleneck framework, applied to time series [7, 9]. Overall, the
generalized Information Bottleneck framework provides not
only a constructive method for predictive inference
from which learning algorithms can be derived, but also a general
information theoretic framework for data processing that is
well grounded in physics, as I have argued.
Predictive inference in the presence of feedback from the learner
Living systems learn by interacting with their environment,
in a sense they "ask questions and do experiments", not only
by actively filtering the data but also by perturbing, and,
to some degree, controlling the environment that they are
learning about. Ultimately, one would like to understand the
emergence of complex behaviors from simple first principles.
To ask about simple characteristics of policies which would
allow an agent to optimally capture predictive information,
I extended the Information Bottleneck approach to the
situation with feedback from the learner, and showed that
optimal encoding in the presence of feedback requires action
strategies to balance exploration with control. Both
aspects, exploration and control, emerge in this treatment
as necessary ingredients for behaviors with maximal
predictive power.
This study resulted in a novel algorithm for recursively
learning optimal models and policies from data, which my
student Lisa Miller has applied to selected problems in
robotics (work in progress). In the context of reinforcement
learning, this approach allowed us to study how
exploration emerges as an optimal strategy, driven by the
need to gather information, rather than being put in by hand
as action policy randomization.
Invited Talks at Conferences and Summer Schools
- 09/2010 Eighth Fall Course on Computational
Neuroscience, Bernstein Center for Computational
Neuroscience, and Max Planck Institute for Dynamics and
Self-Organization, Goettingen, Germany.
- 08/2009 Keynote Lecture. 2nd International Conference on
Guided Self-Organization (GSO), Leipzig, Germany.
- 07/2009 Chaos/Xaoc, Conference Center of the National
Academy of Sciences in Woods Hole, MA.
- 08/2008 Santa Fe Institute Complex Systems Summer School
at the Institute of Theoretical Physics, Chinese Academy
of Sciences (CAS), Beijing, China.
- 09/2008 Ecole Recherche Multimodale d'Information
Techniques & Sciences (ERMITES); Universite du Sud
Toulon-Var, Laboratoire des Sciences de l'Information et
des Systemes, Association Francaise de la Communication
Parlee; Giens, France.
- 09/2009 European Conference on Complex Systems, Warwick
(ECCS ‘09), Workshop on Information, Computation, and
- 04/2006 Bellairs Reinforcement Learning Workshop,
- 12/2005 Neural Information Processing Systems (NIPS),
Workshop on ``Models of Behavioral Learning'', Vancouver,
- 07/2004 Kavli Institute for Theoretical Physics (KITP),
University of California, Santa Barbara. Program:
Understanding the Brain.
Invited Seminars and Colloquia
- 04/2010 University of British Columbia, Canada, Physics
- 03/2010 University of Victoria, Canada, Physics
- 01/2010 University of California at Berkeley, Redwood
Center for Theoretical Neuroscience.
- 12/2009 Universitaet Koeln, Germany, Physics Department.
- 11/2009 International Center of Theoretical Physics
(ICTP), Trieste, Italy.
- 04/2009 University of California at Davis, Computational
Science and Engineering Center, Davis, CA.
- 10/2008 Max Planck Institute for Biological Cybernetics,
Machine Learning Seminar, Tuebingen, Germany.
- 09/2007 University of Montreal, Montreal, Canada.
Department of Computer Science.
- 09/2007 McGill University, Montreal, Canada.
McGill-UdeM-MITACS Machine Learning Seminar.
- 03/2007 University of California at Davis, Computational
Science and Engineering Center, Davis, CA.
- 01/2007 TU Munich, Institute of Computer Science,
- 01/2007 ETH Zuerich, Institute for Neuroinformatics,
- 01/2007 IDSIA, Institute for Artificial Intelligence
(Istituto Dalle Molle di Studi sull'Intelligenza
Artificiale), Lugano, Switzerland.
- 01/2007 ETH Zuerich, Institute of Computer Sciences,
- 01/2007 University of Hawai'i at Manoa, Physics
- 07/2006 Max Planck Institute for Biological Cybernetics,
- 06/2006 McGill University, Montreal, Canada. Department
of Computer Science.
- 04/2005 University of Hawai'i at Manoa, Honolulu, HI,
- 09/2005 University College Dublin, Dublin, Ireland.
- 04/2005 University of Hawai'i, Hilo, Hilo, HI,
Department of Computer Science.
- 04/2005 University of Hawai'i, Manoa, Honolulu, HI,
Department of Electrical Engineering.
- 03/2003 University of British Columbia, Vancouver,
Canada, Department of Physics.
- 08/2003 Humboldt University, Berlin, Germany,
Theoretical Biology Seminar.
- 08/2003 Hamilton Institute, National University of
Ireland, Maynooth, Ireland. Machine Learning and Cognitive
- 08/2003 University of Hawai'i, Honolulu, HI. Department
of Electrical Engineering.
- 07/2003 Max Planck Institute for Biological Cybernetics,
Tuebingen, Germany, Machine Learning Seminar.
- 07/2003 ETH Zuerich, Switzerland, Institute for
- 04/2003 Columbia University, New York, NY, Applied
Quantum machine learning
Information processing devices ultimately have to obey quantum mechanics. With the
advent of quantum computers, and with mounting
evidence for the importance of quantum effects in certain
biological systems, understanding efficient use of quantum
information has become increasingly important.
We generalized the Information Bottleneck framework
to quantum information processing. Together with
Renato Renner (ETH Zurich) and members of his group, we are
now working towards extending the approach I proposed
to quantum systems.
Students interested in this research should email me for
possible projects. This could be done as a Master's or PhD thesis.
(08/24-10/31, 2019) Pauli Center for Theoretical
Studies, ETH Zuerich, Switzerland.
Rules of Information Acquisition and Processing
Observers, biological and man made alike, do not gain
anything by choosing strategies to acquire and represent
data which would not allow them to operate, in principle, as
close to the physical limits as possible. This does not mean
that they always will operate optimally--in certain
situations optimality might be either impossible or a
disadvantage (for example, to make sure a process runs in
one direction, excess dissipation might be necessary). It
just means that observers should choose the structure of
their strategies, not the execution, to allow for achieving
the limits. Then they have the freedom to invest other
resources to achieve the limits whenever necessary (e.g.
invest time to achieve energy efficiency).
Hence, we may postulate the following principle: Observers
choose the general rules they use to acquire and process
information in such a way that the fundamental physical
limits to information processing can be reached as closely
as possible. (Again, remember that "can be reached" and
"will be achieved" are two different statements, and the
second one obviously would not yield a reasonable postulate,
as counter examples exist in nature.)
Applied to energy efficiency, we know that the "rule" that
emerges is to retain predictive information and discard
irrelevant information. What rule(s) might emerge from
speed limits? What are the speed limits for mesoscopic and
macroscopic observers? What rules emerge from limits on
robustness? What are those limits, and how should we even
quantify robustness of information processing? Is it
possible that the emerging set of rules might serve as
axioms for an operational approach to quantum mechanics?
With students and collaborators, we apply the theory developed
in the lab, together with other machine learning methods, to
problems of interest. I try to keep the focus on applications
that are of scientific relevance and/or have some potential
positive impact on society.
Since 2008, the world has been reminded of the importance of
having a stable global financial system. It is usually the
poor that suffer most from crashes, and therefore, preventing
instability becomes a moral imperative. As scientists, we have
little or no control over most relevant factors, such as
political decision making. But, it does fall into my area of
expertise to work on improving the mathematical tools used in
the finance sector.
Textbook portfolio optimization methods used in quantitative
finance produce solutions that are not stable under sample
fluctuations when used in practice. This effect was discovered
by a team of physicists, led by Imre Kondor, and
characterized using methods from statistical physics. The
instability poses a fundamental problem, because solutions
that are not stable under sample fluctuations may look optimal
for a given sample, but are, in effect, very far from optimal
with respect to the average risk. In the bigger picture,
instabilities of this type show up in many places in finance,
in the economy at large, and also in other complex systems.
Understanding systemic risk has become a priority since the
recent financial crisis, partly because this understanding
could help to determine the right regulation.
The instability was discovered in the regime in which the
number of assets is large and comparable to the number of data
points, as is typically the case in large institutions, such
as banks and insurance companies. I realized that the
instability is related to over-fitting, and pointed out that
portfolio optimization needs to be regularized to fix the
problem. The main insight is that large portfolios are
selected by minimization of an empirical risk measure, in a
regime in which there is not enough data to guarantee small
actual risk, i.e. there is not enough data to ensure that
empirical averages converge to expectation values. This is the
case because the practical situation for selecting large
institutional portfolios dictates that the amount of
historical data is more or less comparable to the number of
assets. The problem can be addressed by known regularization
methods. Interestingly, when one uses the fashionable
"expected shortfall" risk measure, then the regularized
portfolio problem results in an algorithm that is closely
related to support vector regression. Support vector
algorithms have met with considerable success in machine
learning and it is highly desirable to be able to exploit them
also for portfolio selection. We gave a detailed derivation of
the algorithm, which differs slightly from a previously
known SVM algorithm due to the nature of the portfolio
selection problem. We also show that the proposed
regularization corresponds to a diversification "pressure".
This then means that diversification, besides counteracting
downward fluctuations in some assets by upward fluctuations in
others, is also crucial for improving the stability of the
solution. The approach we provide here allows for the
simultaneous treatment of optimization and diversification in
one framework which allows the investor to trade-off between
the two, depending on the size of the available data set.
In two follow-up papers [14, 16] we have characterized the
typical behavior of the optimal liquidation strategies, in the
limit of large portfolio sizes, by means of a replica
calculation, showing how regularization can remove the
instability. We furthermore showed how regularization
naturally emerges when market impact of portfolio liquidation
is taken into account. The idea is that an investor should
care about the risk of the cashflow that could be generated by
the portfolio if it was liquidated. But the liquidation of
large positions will influence prices, and that has to be
taken into account when computing the risk of the cash that
could be generated from the portfolio. We showed which market
impact functions correspond to different regularizers, and
systematically analyzed their effects on performance.
Importantly, we found that the instability is cured (meaning
that the divergence goes away) for all Lp norms with p > 1.
However, for the fashionable L1 norm, things are more
complicated. There is a way of implementing it that does cure
the instability, but the most naive implementation may not -
it may only shift the divergence.
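A minimal numeric sketch of the instability, and of regularization acting as diversification pressure, under assumed simplifications (a minimum-variance objective with an L2 penalty rather than the expected-shortfall and support-vector formulation of the papers; synthetic i.i.d. returns):

```python
import numpy as np

def min_variance_weights(returns, ridge=0.0):
    """Minimum-variance portfolio weights from sample data, with an optional
    L2 (ridge) penalty that shrinks the solution toward uniform weights."""
    n = returns.shape[1]
    cov = np.cov(returns, rowvar=False) + ridge * np.eye(n)
    w = np.linalg.solve(cov, np.ones(n))   # unnormalized min-variance solution
    return w / w.sum()                     # enforce that weights sum to 1

rng = np.random.default_rng(0)
n_assets, n_obs = 40, 50                   # n comparable to t: the unstable regime
true_cov = np.eye(n_assets)                # true optimum is uniform weights
sample1 = rng.multivariate_normal(np.zeros(n_assets), true_cov, n_obs)
sample2 = rng.multivariate_normal(np.zeros(n_assets), true_cov, n_obs)

changes = {}
for ridge in (0.0, 1.0):
    w1 = min_variance_weights(sample1, ridge)
    w2 = min_variance_weights(sample2, ridge)
    changes[ridge] = np.abs(w1 - w2).sum()  # sensitivity to sample fluctuations
    print(f"ridge={ridge}: weight change across samples = {changes[ridge]:.3f}")
```

Two independent samples from the same market yield very different unregularized portfolios, while the ridge-penalized solution is stable, illustrating why solutions that look optimal in-sample can be far from optimal on average.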
Geological surface processes are relevant in the context of
understanding physical mechanisms underlying volcanism, and
the temporal evolution of planets and moons in the Solar
System. In a team including former students C. Hamilton (lead)
and W. Wright, we analyzed the spatial distribution of
volcanic craters on Io, a moon of Jupiter, believed to be
more volcanically active than any other object in the Solar
System. The extreme volcanism on Io results from tidal
heating, but its tidal dissipation mechanisms and magma ascent
processes are poorly constrained. Our results may help narrow
down the possible mechanisms underlying Io's volcanism by
putting constraints on physical models.
Large scale multi-disciplinary research efforts often face the
problem that synergy might be impeded by not knowing which
research from other disciplines relates (sometimes in
unexpected ways) to, or may inspire, one's own research.
Document classification can be applied here to build
visualization interfaces as library science tools to aid
multi-disciplinary collaborations. This is of relevance, as it
may help to increase communication between groups, increase
synergy, reduce redundancy, and perhaps even occasionally spur
scientific creativity. In the ``old days", each trip to the
library could turn into an adventure as one walked down the
aisles and got distracted from the main purpose of the trip by
some interesting looking titles. Before one knew it, one was
reading something unexpected, and had found a new idea, a new
approach. The taste of these adventures has changed in the
digital age. To a certain extent, they have been replaced by
online browsing, but the sheer volume of information is often
a limiting factor, and there may well be use for tools that
help organize relevant information in an intuitive way. As
part of a NASA funded study, we developed such a tool in the
context of astrobiology, an area that spans many fields, from
chemistry to biology, to astronomy, with my student L. Miller
and my colleague R. Gazan.
With my student Elan Stopnitzky, we applied Gavin Crooks'
notion of non-equilibrium maximum entropy hyperensembles to
the abundance of building blocks for life, and found that
the spontaneous emergence of life looks less unlikely if one
does not assume that conditions on early Earth were
consistently in or near thermodynamic equilibrium.
Heterogeneous non-equilibrium driving appears as a possible
catalyst, the more so, the further away from equilibrium
chemical reactions happened.
prediction and global volcanic activity profiles
Robert Wright of HIGP has 18 years of thermal emission data
from 110 volcanoes around the Earth, recorded twice daily.
We are analyzing this data with a variety of machine
learning techniques, with the ultimate goal of predicting
thermal output trends and classifying volcanoes by their
activity profiles.
This project is perfect for a student in CS, or HIGP, or
Physics, and can be done as 499 or 699.
composition of volcanic rocks and their classification
A colleague has built an impressive corpus of
chemical composition data of volcanic rocks, mostly from
Iceland. We have done some preliminary cluster analysis of
this data and are interested in continuing work with the
ultimate goal of predicting rock origin (which volcano and
possibly which eruption) from chemical composition.
This project is perfect for a CS student, and could be done as
499 or 699.
-  E. Stopnitzky and S. Still (2019) Non-equilibrium odds
for the emergence of life. Phys. Rev. E 99, 052101.
-  F. Caccioli, I. Kondor, M. Marsili and S. Still (2016)
Liquidity Risk And Instabilities In Portfolio Optimization.
Int. J. Theor. Appl. Finan. 19.
-  L. J. Miller, R. Gazan and S. Still (2014) Unsupervised
Document Classification and Visualization of Unstructured
Text for the Support of Interdisciplinary Collaboration.
Proc. 17th ACM Conf. Computer Supported Cooperative Work and
Social Computing.
-  F. Caccioli, I. Kondor, M. Marsili and S. Still (2013)
Optimal liquidation strategies regularize portfolio
selection. The European Journal of Finance 19 (6), 554-571.
-  C. W. Hamilton, C. Beggan, S. Still, M. Beuthe, R. Lopes,
D. Williams, J. Radebaugh, and W. Wright (2013) Spatial
distribution of volcanoes on Io: implications for tidal
heating and magma ascent. Earth and Planetary Science
Letters 361, pp. 272–286.
(Paper was reported on by several news agencies.)
-  S. Still and I. Kondor (2010) Regularizing Portfolio
Optimization. New Journal of Physics 12, 075034.
Talks (about regularized portfolio optimization):
- 11/2016 - Santa Fe Institute, NM.
- 11/2011 - Applied Math
Seminar, City College New York, NY.
NASA, ``A 30 Year,
Multi-Sensor Analysis of Global Volcanic Thermal Unrest"
(co-PI). PI: Robert Wright (HIGP, UHM).
This material is presented to ensure timely
dissemination of scholarly and technical work. Copyright
and all rights therein are retained by authors or by other
copyright holders. All persons copying this information are
expected to adhere to the terms and constraints invoked by
each author's copyright. In most cases, these works may
not be reposted without the explicit permission of the
copyright holder.