We Don't Know

Illustration by Marina Muun

At Rice, one of the country’s most recognized research and teaching institutions, we talk a lot about what we know — our scholarly expertise and the brilliant faculty, staff and students who drive the pursuit of knowledge. “Knowing” is the coin of the realm. So when a book titled “We Have No Idea: A Guide to the Unknown Universe” by alumnus Daniel Whiteson ’97 and Jorge Cham arrived in the mail, we were inspired to ask some of our distinguished faculty from various disciplines to tell us what they didn’t know. Our pitch to faculty was straightforward: “Tell us what we need to find out, what questions keep you up at night and what answers seem completely out of reach — for now.” The response was so robust that we decided to create a regular department in future issues of the magazine. Our scholars are pursuing frontiers of knowledge that range from the tiniest specks of life to the vast expanse of the universe. Their quests frequently lead to new questions, especially about the ethical implications of their findings. And for more on our inspiration, seek out Whiteson and Cham’s entertaining, informative book.


Solving Molecular Mysteries of the Cell

Peter Wolynes thinks about big questions and tiny structures in biology, chemistry and physics. His collaborative research at the Center for Theoretical Biological Physics is helping to map the architecture of the human genome.

Illustration by Marina Muun

In my lab, I’m trying to understand really fundamental things about biology. Our work often opens up more questions than answers. We first have to uncover how individual molecules in biological systems work, then see how they cooperate with each other. The ways that genes are regulated on a biochemical level is still a big question. We know regulation happens, but what’s really going on at the molecular level?

Biochemistry has made huge advances on such questions by taking molecules out of cells, finding out what they are and then showing that they’re involved in chemical reactions of various types. But we don’t always know if those molecules behave the same way inside a cell. Does the structure of the cell matter? Does it change anything? We do not know how structures are set up on the cell scale.

Protein folding is a great example of something we think we understand on a molecular level. We know for the most part how proteins fold — they’re long, stringy molecules, and certain spots on the molecule are attracted to other specific spots. As a protein shakes around randomly inside a cell, those spots eventually find each other and assemble the molecule in three dimensions in a very predictable way.

As you get to bigger structures like the DNA in the genome, though, it’s not clear exactly how the physics works. A chromosome is a much bigger molecule than a protein — so how much of its structure emerges automatically? Is it shaped by random jostling as well? Or is it powered by specific motors that push it into place? The mathematics for describing the architecture and machinery of the cell in molecular terms doesn’t exist yet. And that architecture affects everything in our bodies.

Peter Wolynes is the D.R. Bullard-Welch Foundation Professor of Science, a professor of chemistry, physics and astronomy, and materials science and nanoengineering. His lab is part of the Center for Theoretical Biological Physics.


Wasted Energy

What if we could make chemical reactions on an industrial scale much more energy efficient? Naomi J. Halas sees the benefits.

Nuclear power plant

Right now, we don’t really know how to run chemical reactions on an industrial scale without using vast amounts of energy. For any chemical reaction to take place, an atom’s electrons need to be excited and moving at a specific rate. We’ve been doing that the same way for more than 100 years — by adding huge amounts of pressure and heat.

The problem is, that’s inefficient. To get the end product you want, you might start with three or four ingredients, create a reaction between them and wind up with your chemical — plus a few, or many, unwanted byproducts. The process also takes an incredible amount of electricity. At one large chemical plant I visited recently, they had their own reactors, and each one could power the city of Chicago. So how do we get around this? Ideally, we want to surgically re-engineer molecules to drive these reactions in a more focused way. Using nanoparticles, we might be able to insert energy into a reaction to make it happen on demand without creating wasteful byproducts.

Depending on its size, a nanoparticle will absorb sunlight at a very specific frequency. Bigger particles absorb longer wavelengths, and smaller particles absorb shorter ones. If you include these particles in certain reactions, you can design really specific energy levels right into the mix — you’d expose the whole thing to a sweep of white light, and they’d take the energy they needed in a surgical manner.

If we can do that, we can make ammonia for fertilizer sustainably and feed more people worldwide. We can make hydrogen fuel on demand for new types of cars. We can even remediate some of the carbon dioxide in the atmosphere that contributes to climate change. If we could really control chemistry, we’d have a more sustainable planet. It’d be as revolutionary as splitting the atom was in the last century.

Naomi J. Halas is the Stanley C. Moore Professor of Electrical and Computer Engineering and a professor of chemistry, bioengineering, physics and astronomy, and materials science and nanoengineering. She directs Rice’s Smalley-Curl Institute and is a member of both the National Academies of Sciences and Engineering.


Privacy, Truth and Historical Research

The historical record can be either lacking or abundant. Historian Peter C. Caldwell is interested in the many challenges surrounding modern history’s abundant data and source material.

Illustration by Marina Muun

Historians approach “knowledge” in a slightly different way than scientists might. In science, it’s all about what we’ll know in the future — the nature of the Big Bang or dark matter, for instance. We hypothesize and experiment, and eventually arrive at theoretical advances. In history, that’s sort of turned on its head. History involves information that was once known by someone but, in many cases, has been forgotten. The things that we’d love to “know” in our field have been left out of the historical record. They’re the things that matter most to us in everyday life: what we eat, our religions, our culture. All that stuff is really important to us, but not necessarily recorded.

When it comes to modern history, though, we face the opposite problem. We have an abundance of sources, but they’re often totally unsurveyable. The number of documents from the past century alone that are kept in government storage is overwhelming. In the former East Germany, for example, we still haven’t worked through all the material the Stasi [secret police] left behind, and we’ve had 30 years to do it.

The emergence of this vast amount of data puts historians in an interesting position. How do we make sense of all this stuff in its historical moment? Dealing with that much information poses a technical challenge on one hand and a social challenge on the other. If we pull out all those reams of information on so many people’s lives, what’s the political backlash? It could be used in horrible ways. The Stasi sort of perfected that.

Think about it in terms of the data privacy debate here in the U.S. Many of us feel uncomfortable if the government or corporations have access to information about our health, ethnicity, genes. It’s not so different in dealing with the Stasi files. To really examine this trove of data, historians have to study ourselves as part of the historical moment.

Peter C. Caldwell is the Samuel G. McCann Professor of History and department chair of history.


Our Common Universe

You, me, the stars and planets. What do we have in common? Frank Geurts explores the mysteries of the universe.

The stars and planets

In physics, there are a lot of things we don’t know! For instance, visible matter only makes up about 5 percent of the universe — the rest is stuff we haven’t figured out, like dark energy and dark matter. I work on that 5 percent of stuff we know about, and even then, I have to admit we still don’t know that much about it.

What I want to know is, What makes mass? It’s not something we generally think about. But all the visible stuff in the universe — the atoms and molecules that make up you and me, stars, planets — is built out of smaller structures we can’t see but whose existence we can infer through experiments.

We know, for instance, that the nucleus of every atom is made of particles called protons and neutrons. Protons give the atom mass, and by extension, give everything in the visible world mass. We also know that protons consist of three smaller particles called quarks. But if you take the masses of those quarks and add them all up, you don’t get a proton’s mass at all. It adds up to just a small fraction. So, where is the rest of that mass coming from? How do the interactions between the quarks add up to this, and what fundamental symmetries in nature play their role?
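To put rough numbers on that fraction (the quark and proton masses below are standard textbook values, not figures from this article): a proton is built from two up quarks and one down quark, and their intrinsic masses sum to only about 1 percent of the proton’s mass. The other roughly 99 percent comes from the energy of the strong-force interactions binding the quarks, via E = mc².

```latex
% Approximate quark masses (standard values, assumed here):
%   m_u \approx 2.2~\mathrm{MeV}/c^2, \qquad m_d \approx 4.7~\mathrm{MeV}/c^2
\begin{align}
  2m_u + m_d &\approx 2(2.2) + 4.7 \approx 9.1~\mathrm{MeV}/c^2 \\
  m_p &\approx 938.3~\mathrm{MeV}/c^2 \\
  \frac{2m_u + m_d}{m_p} &\approx \frac{9.1}{938.3} \approx 1\%
\end{align}
```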

It’s still a bit of a mystery. Are the quarks in visible matter all there is? Or are there partners to these quarks that we haven’t been able to see with current accelerators? Are those mystery particles also responsible for dark matter? Even our most powerful particle accelerators aren’t at a point where we can make that observation.

Frank Geurts is an associate professor of physics and astronomy who studies nuclear physics.


The Achievement Gap

Ruth López Turley’s research asks fundamental questions about access to education.

Illustration by Marina Muun

There are huge inequities in education, but we don’t always understand why they exist and what can be done about them. And in cases where we do know what the problem is, the broader public often doesn’t have the knowledge or will to respond accordingly.

We know, for instance, that achievement gaps correlate strongly with school segregation, and those gaps will persist unless we target school segregation. So what do we do about it? That’s what researchers in my field are trying to figure out. How do you get people to make a decision that we consider to be personal — like where they decide to live or send their children to school — and steer them to do what’s best for society? How do we incentivize people to integrate without feeling like they’re sacrificing their kids’ future?

We need solutions on various levels. Regionally, if you look at the school districts in the center of Harris County where Houston is located, white students and advantaged students are highly underrepresented, but in the suburbs, they’re overrepresented. We don’t really have research that presents a solution to that problem.

We can learn from past efforts to integrate. We can look at what went wrong in those efforts and see what we can do today to make the effects of desegregation last longer. Mandatory busing, for example, worked but didn’t last. If we focus on incentivizing people instead, we could be far more productive. How do we do that? One way is to offer affordable housing in high-income neighborhoods, or inclusionary zoning where land use policy allows for low-income families to live in high-income communities. Another is to develop magnet schools designed to encourage families from varying racial, ethnic or economic groups to attend the same schools, coupled with targeted outreach and inexpensive transportation options. Research-practice partnerships like HERC, the Houston Education Research Consortium, inform decision-making with the goal of closing these achievement gaps. This type of research can help districts, especially neighboring districts, test and evaluate integration policies.

Ruth López Turley is a professor of sociology, the director of HERC and associate director of the Kinder Institute for Urban Research.


Bringing Back Movement and Motion

Can we bring movement back to a damaged central nervous system and treat disorders that impair motion? B.J. Fregly asks “what if?”

Central nervous system

Traumatic accidents, strokes, arthritis, Parkinson’s disease, cerebral palsy, even orthopedic cancer — all of those disorders can affect your ability to walk, reach and manipulate objects. When that happens, you often lose your independence.

If we could simulate via computer how a patient’s nervous system controls their movement, we could find better ways to treat movement disorders. The big question is, How do you do that mathematically? There are lots of different levels of complexity happening all at once. It’s common to think our brains control all movement — and that’s true to some extent — but a lot of signals also come from other places in ways we don’t understand.

Say you have a cat with a severed spinal cord — it can move its front legs, but its back legs are totally limp. You can suspend that cat over a treadmill, set its front legs walking and manually move its back legs. At a certain point, it’ll start walking on its own. This observation shows the importance of sensory feedback mechanisms. All that sensory information is going up to the spinal column and combining to create control signals that go back to the legs to walk.

What we want to know is, How do we apply that to people? In patients with incomplete spinal cord injuries, a similar training approach can sometimes get some control back to the limbs. If we had a model of the patient’s anatomy, physiology and nervous system, we could augment the information available to doctors and design optimal treatments for individual patients.

B.J. Fregly is a professor of mechanical engineering and Cancer Prevention and Research Institute of Texas scholar in cancer research.


The Social Network

Rachel Kimbro is uncovering the vital role of social networks in communities.

Illustration by Marina Muun

As sociologists, we know that the social links you have to others can help you in certain situations — but we don’t know how people activate those networks and use them to help their families in different social spheres, especially in disaster areas.

For example, food security is strongly linked with household socio-economic status. Yet many poor families in the U.S. are able to avoid food insecurity. So what is it about those communities that keeps them secure?

We’ve found that lower-income households seem to have slightly better protection from food insecurity if they live in a low-income neighborhood, but if they live in a high-income community, they actually have a higher risk. Why? Could it be that they’re sharing connections — sharing institutional help — like where you go to get food from a church or food pantry? Is it social network assistance? Is it institutions in that neighborhood that help them?

It’s important to understand how this kind of network activation works across all communities — not just low income, but middle class and affluent ones as well. Almost all my work has focused on low-income communities in the past, because I’m passionate about social justice. But there are things that we can learn from how affluent communities activate resources as well. For example, understanding why social resources are so substantial in these communities could help us better understand inequality in general, and whether the ways affluent people activate social networks actually increase that inequality.

It’s not a one-way street — there are strengths to living in low-income communities, and more affluent areas can learn a lot from them. But when we’re creating programs that benefit either community, we need a better understanding of how its social networks function.

Rachel Kimbro ’01 is a professor of sociology who studies poverty and inequality, family and population health.


Mysterious Consciousness

Jeff Kripal’s research takes him to the outer edges of consciousness and probes the connections between science and the humanities.


Ultimately, my goal is to try to understand what consciousness is. We just have no idea about that right now, but that may be partially due to the way we’re addressing it academically. The mistake a lot of researchers make is thinking that consciousness arises only from our brain structure. That’s one method of trying to understand it, but it isn’t enough. The nature of consciousness involves subjectivity, agency, freedom. It involves things that the sciences don’t have the tools to address, but the humanities do.

The paranormal is something that I’m particularly interested in. Now, I would agree that we do not have conclusive laboratory evidence for robust paranormal phenomena, although there is plenty of statistical evidence for minor, largely unconscious effects. The reason is simple. These robust forms spike in life circumstances like disease and injury — traumatic and deadly contexts. They’re also anecdotal, which simply means they are unique and content-specific, and cannot be replicated in the lab — so they’re often dismissed out of hand. But this assumes that the methods of the sciences are the only way of arriving at any kind of knowledge.

It’s the anomalies, the paranormal phenomena, that are often the most important things to look at and try to understand when studying religious experiences. Those are what create religious movements, but cannot be replicated or measured. The methods of the humanities are much more powerful and helpful than the sciences when it comes to studying these events. When you speak to a person who experiences something extraordinary, you treat the event as it presents itself — as a story, as a text. You try to ask, What is the historical context? The life context? The meaning or message that experience is communicating to this person?

These are typical questions in the humanities. This approach also implies that the experiencer is not the only agent of an experience — that an event’s meaning has more to do with how it’s communicated to another person or community. To really understand religious experience and consciousness, the sciences need the humanities, and vice versa. We cannot deny or ignore either aspect if we’re going to solve problems at this scale.

Jeff Kripal is the J. Newton Rayzor Professor of Religion.


Living With Machines

Moshe Vardi is at the forefront of research into the growing role of machines in the workplace. How will humans live with our intelligent machines?

Illustration by Marina Muun

I’m a traditional computer scientist, so my one fundamental question is, What tasks are currently being done by humans, and how can we automate them with machines? It’s really all about mechanization or automation. I mean, the goal of ENIAC, the first general-purpose electronic computer, unveiled in 1946, was to compute ballistics tables for the military. That work used to be done by people called computers. It was a job title, but we’ve since mechanized it.

In a sense, as we get more and more sophisticated with our technology, more questions pop up that we didn’t think about before. Twenty years ago, it was, How do you find information on the web? After that, it was, How do you get meaningful information? Today, what keeps most computer scientists up at night is thinking about, What are the things we want machines to do in the future, and what’s the most economical way to do it?

That’s the first half of the night — the second half, the witching hours, are spent tossing and turning, thinking about the societal effects new technologies might have.

For example, if we make reliable self-driving cars, that would have a huge impact. Nearly 1.25 million people die in car crashes every year worldwide, many of which are caused by human error. We’re terrible drivers, so we could save lives! Sounds great, right? But then you start thinking, Wait, if we automate cars, it takes the jobs of 4 million drivers. What about them? Driving is an important source of employment for lots of people.

For humanity, it’s an epic balancing act between mechanizing more and understanding what it ultimately means for us as a culture. The role of technology is determined by what we decide to do with it — so we’re really in charge of the process. But where is humanity going? That’s the biggest question of all.

Moshe Vardi is the Karen Ostrum George Distinguished Service Professor of Computational Engineering and director of the Ken Kennedy Institute for Information Technology.


A Better Human Genome?

Gang Bao is an expert in the emerging technology used to edit genes — fascinating tools with lifesaving and ethical implications.

Human Genome

The big question we want to answer is how to modify the human genome to treat genetic disorders. According to the Centers for Disease Control and Prevention, there are more than 6,000 “single gene” disorders, each of which is caused by a genetic defect, such as a mutation, in just one gene. They’re often really serious, like sickle cell anemia, and most have no cure — unless, that is, we figure out a way to repair the genome itself.

Right now, we’re able to do that to a limited degree. We have genetic tools that can replace a damaged section of a gene like replacing a section of leaky pipe. The question is, How effective is this approach? The best tool we have, called “CRISPR/Cas9,” is really powerful for editing specific genes, but it can also cause inadvertent mutations in other genes during the process (“off-target effects”).

My guess is that it will take 10 to 20 years before we have a powerful, mature technology for gene editing in people as a routine clinical procedure, one that can effectively modify human genes without running the risk of detrimental effects. To get there, we’ll need better tools and a better understanding of how to eliminate off-target effects. You don’t want to start doing this sort of thing in human patients without being able to predict the off-target effects of large-scale gene editing first.

There are also major ethical questions we’ll have to address. On one hand, if we can fix broken genes at birth instead of waiting for diseases to develop, we could save a lot of lives and dramatically improve people’s quality of life. That would be ideal. On the other hand, if we have technology that can edit DNA reliably, it opens the door to other choices — making a baby smarter, more beautiful or changing eye color. Is that sort of thing okay? Is it worth the risk of going through gene editing? The ethical question may be thornier than the technical one.

Gang Bao is the Foyt Family Professor of Bioengineering and a Cancer Prevention and Research Institute of Texas scholar in cancer research.

— David Levin
