Program: Recent Advances in Info-Metrics Research, April 8, 2021
All times Eastern Daylight Time. Presentation titles link to abstracts below.
Workshop Openers
Welcoming Remarks: Max Paul Friedman (Dean, College of Arts and Sciences, American U)
10:00-10:10
Opener/Objectives: Aman Ullah (UC, Riverside)
10:10-10:20
Session I – Invited: Deep Inference 10:20-11:10
- Chair: Robin Lumsdaine (American U)
- Speaker: Karl J. Friston (Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, UK)
Title: Deep Inference
Break: 11:10-11:20
Session II: Info-Metrics Across Sciences 11:20-12:20
- Chair: John Harte (UC Berkeley)
- Speaker 1: Michel Stutzer (Leeds School of Business, U Colorado, Boulder)
Title: Entropy in Asset Pricing Research
- Speaker 2: W. Szpankowski (Computer Science and Director, Center for Science of Information, Purdue)
Title: Precise Minimax Regret for Logistic Regression
- Speaker 3: Ariel Caticha (Department of Physics, SUNY-Albany)
Title: Entropic Dynamics on Statistical Manifolds
Session III: Information Processing in Cell-Cell Communication 12:20-1:00
- Chair: Nataly Kravchenko-Balasha (Laboratory of Biophysics and Cancer Research, Hebrew U)
- Speaker 1: A. Erez (Racah Institute of Physics, Hebrew U, Jerusalem)
Title: Cell-to-Cell Information at a Feedback-Induced Bifurcation Point
- Speaker 2: Bo Sun (Department of Physics, Oregon State U)
Title: Information Flow in Multicellular Systems – from One to Many
Lunch Break: 1:00-1:30
Session IV: Information-Theoretic Econometrics, Statistics and Bayesian 1:30-2:30
- Chair: Essie Maasoumi (Emory U)
- Speaker 1 (Economics, U of Warwick)
Title: Identification of Beliefs in the Presence of Disaster Risk and Misspecification
- Speaker 2: Duncan Foley (New School for Social Research)
Title: Bayesian Quantum Wave/Amplitude Inference in Urn Mixture Models
- Speaker 3 (Economics, UC Riverside)
Title: A Flexible Information Theoretic Approach for Inference of Multiple Regression Function and Marginal Effects
Session V: Info-Metrics in Data Science 2:30-3:10
- Organizer: Min Chen (Department of Engineering Science, Oxford U)
- Chair: Duncan Foley (New School for Social Research)
- Speaker 1: Mateu Sbert (Graphics and Imaging Laboratory, U of Girona)
Title: The Role of the Information Channel in Data Analysis
- Speaker 2
Title: Finding a Bounded Measure for Estimating the Benefit of Data Visualization
Break: 3:10-3:30
Session VI: Info-Metrics and AI Modeling 3:30-4:30
- Chair: Eric Renault (Economics, U of Warwick)
- Speaker 1: William F. Lawless (Math & Psychology, Paine College) and Ira S. Moskowitz (NRL)
Title: Info-Metrics for Autonomous Human-Machine Teams and Systems (A-HMT-S)
- Speaker 2: Ellis Scharfenaker (Economics, U of Utah) and Duncan Foley
Title: Unfulfilled Expectations and Labor Market Interactions: An Info-Metrics Approach
- Speaker 3: Ximing Wu (Agricultural Economics, Texas A&M)
Title: Maximum Entropy Distribution Based on Quantile-grouped Interval Averages
Break: 4:30-4:45
Session VII: Info-Metrics in Action – Short Presentations 4:45-5:45
- Chair: Rossella Bernardini Papalia (Department of Statistical Sciences, U Bologna)
- Speaker 1: Tinatin Mumladze & Danielle Wilson (Department of Economics, American U)
Title: Effect of Policy-relevant Factors on COVID-19 Patient Survival Probability: An Information-Theoretic Analysis
- Speaker 2: Irene Bardi (Department of Statistical Sciences, U Bologna)
Title: Big Data, Data Quality and Entropy: A Proposal for Data Quality Analysis of Digital Platforms
- Speaker 3: Danielle Wilson (Department of Economics, American U)
Title: An Info-Metrics Approach to Estimating the Supplemental Poverty Rates of Public Use Microdata Areas
- Speaker 4: Second Bwanakare (U of Information Technology and Management, Poland)
Title: Introduction to Non-Extensive Cross Entropy (NCE) for Social Phenomena Modelling
Session VIII – Invited: Veridical Data Science and AI 5:45-6:35
- Chair: J. Michael Dunn (Indiana U, Bloomington)
- Speaker: Bin Yu (Departments of Statistics and Electrical Engineering and Computer Sciences, UC Berkeley)
Title: Veridical Data Science for Responsible AI: Characterizing V4 Neurons through DeepTune
Concluding Remarks
6:45-7:00
Amos Golan (American U and SFI) and J. Michael Dunn (Indiana U, Bloomington)
Abstracts
Listed in chronological order of the workshop program.
Deep Inference Karl J. Friston
In the cognitive neurosciences and machine learning, we have formal ways of understanding and characterising perception and decision-making; however, the approaches appear very different: current formulations of perceptual synthesis call on theories like predictive coding and the Bayesian brain hypothesis. Conversely, formulations of decision-making and choice behaviour often appeal to reinforcement learning and the Bellman optimality principle. On the one hand, the brain seems to be in the game of optimising beliefs about how its sensations are caused, while on the other hand our choices and decisions appear to be governed by value functions and reward. Are these formulations irreconcilable, or is there some underlying imperative that renders perceptual inference and decision-making two sides of the same coin?
Key words: active inference ∙ cognitive ∙ dynamics ∙ free energy ∙ epistemic value ∙ self-organization
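A useful reference point is the variational free energy that active inference minimizes. In generic notation (ours, not necessarily the speaker's), with hidden states s and observations o,

  F = E_{q(s)}[ \ln q(s) - \ln p(o, s) ] = D_{KL}[ q(s) \| p(s \mid o) ] - \ln p(o),

so minimizing F simultaneously sharpens posterior beliefs about the causes of sensations and maximizes model evidence, which is how perception and choice can be treated as facets of a single optimization.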
Entropy in Asset Pricing Research Michel Stutzer
Three widely studied asset pricing research problems are the data-based prediction of derivative securities prices, the estimation and testing of extant asset pricing models, and the choice of shortfall-minimizing retirement portfolios. Entropy-based methods have been fruitfully used in all three areas. In addition to the familiar axiomatic, information-theoretic rationalization of entropic approaches to these problems, the statistical theory of large deviations provides an appealing frequentist rationale.
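The large-deviations rationale can be sketched with Sanov's theorem (a generic statement, not a result specific to this talk): for n i.i.d. draws from P, the empirical distribution \hat{P}_n satisfies

  P(\hat{P}_n \in A) \approx \exp\{ -n \inf_{Q \in A} D(Q \| P) \},

so the relative entropy D(Q \| P) is the exponential rate at which undesirable outcomes (for example, portfolio shortfalls) become rare, giving entropy a frequentist interpretation alongside the axiomatic one.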
Precise Minimax Regret for Logistic Regression P. Jacquet (INRIA), G. Shamir (Google) and W. Szpankowski (Purdue)
In this talk we shall discuss online logistic regression with binary labels and general feature values, in which a learner sequentially tries to predict an outcome/label based on data/features received in rounds. Our goal is to evaluate precisely the (maximal) minimax regret, which we analyze using a unique and novel combination of information-theoretic and analytic combinatorics tools, such as the Fourier transform, the saddle point method, and the Mellin transform, in multi-dimensional settings.
To be more precise, the pointwise regret of an online algorithm is defined as the (excess) loss it incurs over a constant comparator (weight vector) that is used for prediction. It depends on the feature values, label sequence, and the learning algorithm. In the maximal minimax scenario we seek the best weights for the worst label sequence over all label distributions.
For dimension d = o(T^{1/3}) we show that the maximal minimax regret grows as

  (d/2) log(2T/π) + c_d + O(d^{3/2}/√T),

where T is the number of rounds of running a training algorithm and c_d is an explicitly computable constant that depends on the dimension d and the data. For features uniformly distributed on a d-dimensional sphere or ball we estimate the constant c_d precisely, showing that c_d ≈ -(d/2) log(d/√(2π)), leading to a minimax regret that grows for large d as (d/2) log(T/d) - √(8π) + O(1). We also extend these results to non-binary labels. The precise maximal minimax regret presented here is the first result of this kind for any feature values and a wide range of d.
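For orientation, the pointwise regret described above can be written schematically (notation ours; the paper's definitions may differ in detail): with log-loss \ell, features x_t, labels y_t and a sigmoid link \sigma,

  R_T = \sum_{t=1}^{T} \ell(y_t, \hat{y}_t) - \min_{w} \sum_{t=1}^{T} \ell\big( y_t, \sigma(\langle w, x_t \rangle) \big),

and the maximal minimax regret is obtained by taking the best algorithm against the worst-case feature and label sequences.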
Entropic Dynamics on Statistical Manifolds Ariel Caticha
Entropic dynamics is a framework in which the laws of dynamics are derived as an application of entropic methods of inference. The most detailed applications so far have been mostly in physics (quantum mechanics, quantum field theory, and gravity), but the basic ideas are applicable to any generic system described by a probability distribution. The dynamics unfolds on a statistical manifold that is automatically endowed with a metric structure provided by information geometry. The curvature of the manifold can exert a significant influence. A central feature is that the model includes an "entropic" notion of time that is tailored to the system under study; the system is its own clock. As one might expect, entropic time is intrinsically directional; there is a natural arrow of time, which leads to a simple description of the approach to equilibrium. (This work has been carried out in collaboration with Pedro Pessoa and Felipe Xavier Costa.)
Cell-to-cell Information at a Feedback-induced Bifurcation Point A. Erez, T.A. Byrd, M. Vennettilli, A. Mugler
A ubiquitous way that cells share information is by exchanging molecules. Yet, the fundamental ways that this information exchange is influenced by intracellular dynamics remain unclear. Here we use information theory to investigate a simple model of two interacting cells with internal feedback. We show that cell-to-cell molecule exchange induces a collective two-cell critical point and that the mutual information between the cells peaks at this critical point. Information can remain large far from the critical point on a manifold of cellular states but scales logarithmically with the correlation time of the system, resulting in an information-correlation time trade-off. This trade-off is strictly imposed, suggesting the correlation time as a proxy for the mutual information.
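The central quantity is the mutual information between the two cells' molecule counts, here written in its standard form with generic variable names n_1 and n_2:

  I(n_1; n_2) = \sum_{n_1, n_2} p(n_1, n_2) \log \frac{p(n_1, n_2)}{p(n_1)\, p(n_2)},

which is what peaks at the feedback-induced critical point in the model described above.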
Information Flow in Multicellular Systems – from One to Many Bo Sun
In collaboration with the Ayelet Lesman lab (TAU), the Mike Overholtzer lab (MSKCC), and Assaf Zaritsky (Ben Gurion U), with contributions from Amos Zamir, Assaf Nahum, Yishaia Zabary, Maor Boublil, and Chen Galed.
Abstract forthcoming.
Identification of Beliefs in the Presence of Disaster Risk and Misspecification Saraswata Chaudhuri, Eric Renault, Oscar Wahlstrom
This paper discusses the econometric underpinnings of Barro (2006)'s defense of the rare disaster model as a way to bring an asset pricing model back "into the right ballpark for explaining the equity-premium and related asset-market puzzles". By construction, arbitrarily low-probability economic disasters can restore the validity of model-implied moment conditions only if the amplitude of economic disasters may be arbitrarily large in due proportion. Unfortunately, we prove an impossibility theorem stating that in the case of potentially unbounded disasters, there is no such thing as a population empirical likelihood-based model-implied probability distribution. In other words, one cannot identify belief distortions for which the empirical likelihood-based implied probabilities in sample, as computed by Julliard and Ghosh (2012), could be a consistent estimator. This may lead one to consider alternative statistical discrepancy measures to avoid the perverse behavior of empirical likelihood. By application of Csiszar (1995)'s generalized projections, we prove that, under sufficient integrability conditions, power divergence Cressie-Read measures with a positive power coefficient properly define a unique population model-implied probability measure. However, when this computation is useful because the reference asset pricing model is misspecified, each power divergence delivers a different model-implied beliefs distortion. Unfortunately, since empirical likelihood is ill behaved, there is no compelling argument for choosing one of these alternative discrepancy measures.
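For reference, the Cressie-Read power-divergence family mentioned above can be written, in one common parameterization (which may differ from the authors' by normalization), as

  CR_\gamma(p \| \pi) = \frac{1}{\gamma(\gamma + 1)} \sum_i p_i \Big[ \Big( \frac{p_i}{\pi_i} \Big)^{\gamma} - 1 \Big],

with empirical likelihood and exponential tilting arising as the limiting cases \gamma \to -1 and \gamma \to 0; the positive-\gamma members are the ones shown here to define a unique model-implied probability measure.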
Maximum Entropy Distribution Based on Quantile-grouped Interval Averages Ximing Wu
Samples are often summarized by a frequency table of intervals with known boundaries. It is widely known that the Maximum Entropy (ME) distribution based on this type of summary is the histogram. Another common format of data summary is the averages of fixed-frequency intervals (for instance, the average of each sample decile). The histogram is not applicable here since typically the defining interval boundaries are not reported. We show that a maximum entropy quantile function arises naturally from entropy optimization subject to this type of summary statistics. We establish the existence and uniqueness of this solution. An algorithm and closed-form solutions for the ME quantile function and its associated distribution, density and Lorenz curve are derived. A number of illustrations are provided. We also compare the informativeness of equal interval grouping and equal frequency grouping within the paradigm of ME estimation.
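A schematic version of the underlying optimization (our notation, not necessarily the paper's exact formulation): with k equal-frequency groups and reported group means m_1, ..., m_k, the ME problem is

  \max_f \; -\int f(x) \ln f(x)\, dx \quad \text{subject to} \quad k \int_{Q((j-1)/k)}^{Q(j/k)} x f(x)\, dx = m_j, \quad j = 1, \dots, k,

where Q is the quantile function implied by f; because the constraints are indexed by quantiles rather than fixed boundaries, the solution is most naturally expressed through the quantile function itself.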
A Flexible Information Theoretic Approach for Inference of Multiple Regression Function and Marginal Effects Amos Golan, Tae-Hwy Lee, Millie Yi Mao, Aman Ullah
We develop an estimation procedure for the multiple regression function and its derivatives (for marginal effects) based on multivariate maximum entropy (ME) density estimation. In estimating a multivariate density, the number of moment constraints in inferring a joint density function increases at a rate much faster than the dimension of the variables, because it depends on high-order moments and cross moments of the variables. Hence, reducing the moment constraints by selecting the relevant moments is essential. We propose two methods for selecting the moment constraints. One is based on a Lasso-type regularization and the other tests the significance level of additional moment constraints. The ME-based estimators are asymptotically normal and root-n convergent. Monte Carlo simulations show that our proposed moment selection methods are consistent and produce the correct joint density estimation, leading to superior estimation and prediction of the regression function and its derivatives. It is also shown that our entropy-based procedure outperforms nonparametric kernel-based methods in estimating and predicting the regression function and its derivatives. An empirical example of the out-of-sample prediction of the wage using education and experience shows that the ME method produces smaller mean squared prediction errors relative to parametric and nonparametric methods.
Key words: Shannon entropy ∙ Information theory ∙ Multivariate maximum entropy distributions ∙ Multiple regression ∙ Recursive integration ∙ Moment constraint selection ∙ Lasso
JEL Classification: C1, C3, C5
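As a rough sketch of the object being estimated (the standard maximum entropy form; the specific moment functions and selection rules are as in the paper): the joint density solving the ME problem subject to the selected moment constraints is of exponential-family type,

  f(x; \lambda) = \frac{1}{Z(\lambda)} \exp\Big( \sum_{j \in S} \lambda_j g_j(x) \Big),

where S indexes the retained own- and cross-moment functions g_j and the multipliers \lambda_j are fitted to the data; the regression function and marginal effects are then conditional expectations and their derivatives computed under the fitted f.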
The Role of the Information Channel in Data Analysis Mateu Sbert, Amos Golan, Miquel Feixas, Shuning Chen, Marius Vila
The concept of an information (or communication) channel was introduced by Shannon to model the communication between a source and a receiver. The concept is general enough to be applied to the joint distribution of any two variables and, beyond its original application to communication, has shown its usefulness in understanding and modeling data in several branches of science. We review here the basic concepts of the information channel and consider its application to the social sciences, and in particular to the social accounting matrix.
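Concretely, the information channel is the conditional distribution p(y | x) linking two variables, and the mutual information it carries decomposes row by row of that channel:

  I(X; Y) = \sum_x p(x) \sum_y p(y \mid x) \log \frac{p(y \mid x)}{p(y)}.

This per-source-symbol decomposition (a generic formulation; the talk's notation may differ) is what makes the channel a natural lens for tabular data such as a social accounting matrix.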
Finding a Bounded Measure for Estimating the Benefit of Data Visualization Min Chen, Mateu Sbert, Alfie Abdul-Rahman, Deborah Silver
Most measurement systems are not ground truth. They are functions that map some reality to quantitative values in order to aid the explanation of that reality and the making of predictions. The cost-benefit measure proposed by Chen and Golan is one such function. While the cost-benefit measure successfully captures trade-offs in various data intelligence workflows, its values can easily shoot up toward infinity, hindering the reconstruction of the reality from the measured values. Motivated by the practical need for a more interpretable cost-benefit measure, we have started a journey to find a function where the measurement of "benefit" is bounded in relation to the entropy of the information space concerned.
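To make the boundedness issue concrete (an illustration only, not the measure the authors ultimately propose): divergence terms of the Kullback-Leibler type, D_{KL}(Q \| P), are unbounded, whereas, for example, the Jensen-Shannon divergence

  D_{JS}(P, Q) = \tfrac{1}{2} D_{KL}(P \| M) + \tfrac{1}{2} D_{KL}(Q \| M), \qquad M = \tfrac{1}{2}(P + Q),

is bounded above by \log 2, which is the kind of behaviour sought for a measure of "benefit" relative to the entropy of the information space.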
Info-Metrics for Autonomous Human-Machine Teams and Systems (A-HMT-S) W.F. Lawless, Ira S. Moskowitz
The info-metrics for the performance of a team or an organization are likely to follow a rational approach that confronts noisy, incomplete and uncertain information organized around Shannon's theory of information. When these metrics are applied, uncertainty is commonly located in methods, algorithms, models or incomplete data. But info-metrics as presently constituted have to be transformed to confront autonomous systems facing uncertain environments, especially when the uncertainty is caused by opposing autonomous systems (e.g., conflict, deception, competition). We address this more difficult problem in our research with our theory of the interdependence of complementarity.
Key words: interdependence ∙ complementarity ∙ bistability ∙ uncertainty and incompleteness ∙ non-factorable information and tradeoffs ∙ info-metrics ∙ geometry
Unfulfilled Expectations and Labor Market Interactions: An Info-Metrics Approach Ellis Scharfenaker & Duncan Foley
We apply the principle of maximum entropy to the problem of inferring equilibrium outcomes in a labor market composed of informational-entropy constrained workers and employers. The model predicts persistent unemployment and job vacancies as a statistical feature of labor market interactions and unfulfilled expectations. A downward-sloping convex Beveridge curve emerges as a feature of the model. Worker and employer interactions also feed back on the wage and determine a statistical equilibrium wage distribution defined by individual behavioral parameters as well as market-level parameters.
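One common way to formalize "informational-entropy constrained" agents in statistical-equilibrium models of this kind (a gloss in our notation, not necessarily the authors' exact specification) is that each agent's mixed strategy over actions a at wage w maximizes entropy subject to a payoff constraint, giving logit ("quantal response") frequencies

  p(a \mid w) = \frac{\exp\{ u(a, w) / T \}}{\sum_{a'} \exp\{ u(a', w) / T \}},

with the behavioral "temperature" T governing how noisily agents pursue payoffs; aggregating such responses for workers and employers is what yields a statistical equilibrium wage distribution rather than a single market-clearing wage.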
Bayesian Quantum Wave/Amplitude Inference in Urn Mixture Models Duncan Foley
We compute the posterior probability of the compositions of two urns from a sample drawn from them, but without knowledge of which urn each sample observation comes from, using both the classical probability method, where the frequency distributions describing the urns are real nonnegative numbers that sum to 1, and the quantum wave/amplitude method, where the frequency distributions describing the urns are complex numbers whose squared magnitudes sum to 1. When the source of sample observations is not known, these posterior probabilities differ due to interference between the unobserved phases of the descriptions of the urns. The quantum wave method must imply posterior probabilities that compress the sample data at least as well as the classical method.
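The contrast can be written compactly in generic two-urn notation (ours): classically, the predictive distribution for an observation x of unknown origin is the mixture

  p_{cl}(x) = w_1 p_1(x) + w_2 p_2(x),

whereas in the wave/amplitude description the urns are encoded by complex amplitudes \psi_1, \psi_2 with weights a_1, a_2, and

  p_q(x) = | a_1 \psi_1(x) + a_2 \psi_2(x) |^2 = |a_1|^2 |\psi_1(x)|^2 + |a_2|^2 |\psi_2(x)|^2 + 2\, \mathrm{Re}\big[ a_1 \bar{a}_2 \psi_1(x) \overline{\psi_2(x)} \big];

the final interference term, which depends on the unobserved phases, is what makes the two posterior probabilities differ when the source of each observation is unknown.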
Effect of Policy-relevant Factors on COVID-19 Patient Survival Probability: An Information-theoretic Analysis Amos Golan, Tinatin Mumladze, Danielle Wilson
The possibility of recurring waves of the novel coronavirus that triggered the 2020 pandemic makes it critical to identify underlying policy-relevant factors that could be leveraged to decrease future COVID-19, and other SARS-related, death rates. We examine variation in several underlying, policy-relevant, country-level factors and COVID-19 patient death rates across twenty countries. We find three such factors that significantly impact the survival probability of patients infected with COVID-19. In order of impact, these are universal TB (BCG) vaccination policies, air pollution deaths, and health-related expenditure. We quantify each probability change by age and sex. To deal with our small sample size, high correlations, and inference at the tails of the distribution, we use an information-theoretic inferential method that also allows us to introduce prior information. These priors are constructed from independent SARS data.
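The inferential step can be sketched generically (our notation; the paper's exact formulation may differ): with a prior distribution q built from the independent SARS data, the estimated probabilities p minimize the cross-entropy

  \min_p \; D(p \| q) = \sum_i p_i \ln \frac{p_i}{q_i} \quad \text{subject to the observed (noisy) data constraints},

so the COVID-19 data pull the solution away from the SARS-based prior only as far as the constraints require, a useful property when the sample is small and interest centers on the tails.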
Big Data, Data Quality, and Entropy: A Proposal for Data Quality Analysis of Digital Platforms Irene Bardi
Data is nowadays everywhere. During the past 20 years, the world's data flow has grown explosively in terms of speed, dimensions and types.
Data enters every sphere of our life (society, economy, privacy and politics), with or without our knowledge of it happening. We are currently living in a data economy. This phenomenon is called 'Big Data', and in this work it is presented from different points of view: definitions, sources, history, properties and, most important, its capacity to produce value.
The value of data is strictly connected to its use: through a process of elaboration and aggregation, as well as by using algorithms, Big Data is used to create a typed schema of individual behaviour that reveals correlations between choices, behaviours, actions, preferences, etc. Consequently, Big Data is a useful means to understand the needs of consumers and to build highly predictive models for supply and demand, services, cultural content, etc., and also to show their evolution. However, even if the amount of data is constantly growing, that amount can create real value only if combined with quality: the poor Data Quality of Big Data can easily lead to big errors, as well as to low efficiency and decision-making mistakes.
In this work I summarize many aspects of the Data Quality of Big Data, analysing the related challenges by providing definitions, indicators and good practices; I also report a hierarchical structure of a Data Quality framework, together with some methods and techniques for Data Quality improvement. I then describe the application of an innovative approach to Data Quality assessment that uses the concept of entropy from information theory. Lastly, I describe the phases of a Data Quality project that I took part in at a real company, and some of the procedures that were carried out to improve the Data Quality.
An Info-Metrics Approach to Estimating the Supplemental Poverty Rates of Public Use Microdata Areas Danielle Wilson
The Supplemental Poverty Measure (SPM) is an extension of the Official Poverty Measure (OPM) that considers non-cash benefits, tax credits and necessary expenses when determining an individual's poverty status. The U.S. Census Bureau annually produces SPM estimates using the Current Population Survey's Annual Social and Economic Supplement (CPS-ASEC). Although the CPS-ASEC collects detailed income and relationship data, it can only produce estimates at the national and state level. Although the larger American Community Survey (ACS) can produce estimates at a more disaggregate level, it does not collect the detailed data necessary to produce SPM estimates. In 2020, the Census Bureau published a series of ACS micro-data sets with imputed values for the necessary components considered by the SPM. However, the ACS estimates differ from their CPS-ASEC counterparts at both the national and state level. These differences are likely due to the additional imputation needed in the ACS, but their magnitude is unknown at the Public Use Microdata Area (PUMA) level. Using an information-theoretic approach proposed by Bernardini Papalia & Fernandez-Vazquez (2020), the relative error of disaggregate SPM rates in the ACS is estimated by constraining their weighted average to more reliable state-level CPS-ASEC SPM rates. The estimated (relative) errors are subsequently used to refine all original average PUMA ACS rates for 2016 to 2018.
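Schematically, and in our notation rather than the paper's: if r_p is the imputed ACS SPM rate for PUMA p, w_p its population weight within state s, and \varepsilon_p its unknown relative error, the approach imposes

  \sum_{p \in s} w_p \, r_p (1 + \varepsilon_p) = r_s^{CPS},

so that the error-adjusted PUMA rates aggregate to the more reliable state-level CPS-ASEC rate; the \varepsilon_p are then chosen by the information-theoretic criterion and used to refine the published PUMA rates.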
Introduction to Non-extensive Cross-entropy (NCE) for Social Phenomena Modelling Second Bwanakare
Motivation: The Tsallis NCE is suitable for statistical modelling of complex (non-ergodic) systems. It generalises the Shannon-Gibbs entropy and depicts the level of complexity of the system.
Main topics:
Power law (PL) in social sciences
Tsallis entropy vs Shannon-Gibbs entropy (standard forms are given after this list)
q-Tsallis generalized Kullback-Leibler information divergence (q-TKL-ID)
Inference Application
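For readers new to the non-extensive framework, the standard forms (not specific to this talk) of the Tsallis entropy of order q and its companion divergence are

  S_q(p) = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad D_q(p \| \pi) = \frac{1}{q - 1} \Big( \sum_i p_i^{\,q} \pi_i^{\,1-q} - 1 \Big),

both of which recover the Shannon-Gibbs entropy and the Kullback-Leibler divergence in the limit q \to 1; the index q measures the departure from extensive, ergodic behaviour.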
Veridical Data Science for Responsible AI: Characterizing V4 Neurons through DeepTune Bin Yu
"AI听is like nuclear energy 鈥 both promising and dangerous" 鈥 Bill Gates, 2019.
Data Science is a pillar of AI and has driven most of the recent cutting-edge discoveries in biomedical research. In practice, Data Science has a life cycle (DSLC) that includes problem formulation, data collection, data cleaning, modeling, result interpretation and the drawing of conclusions. Human judgement calls are ubiquitous at every step of this process, e.g., in choosing data cleaning methods, predictive algorithms and data perturbations. Such judgment calls are often responsible for the "dangers" of AI. To maximally mitigate these dangers, we developed a framework based on three core principles: Predictability, Computability and Stability (PCS). Through a workflow and documentation (in R Markdown or Jupyter Notebook) that allows one to manage the whole DSLC, the PCS framework unifies, streamlines and expands on the best practices of machine learning and statistics, bringing us a step forward towards veridical Data Science.
The PCS framework will be illustrated through the development of the DeepTune framework for characterizing V4 neurons. DeepTune builds predictive models using DNNs and linear regression and applies the stability principle to obtain stable interpretations of 18 predictive models. Finally, a general DNN interpretation method based on contextual decomposition (CD) will be discussed with applications to sentiment analysis and cosmological parameter estimation.