Tuesday, 26 May 2015

Conduct Disorder as a Substance Abuse Risk Factor

In this series of research reviews on conduct disorder, several important findings are evident.

  1. Conduct disorder (CD) commonly evolves into adult antisocial personality disorder.
  2. Conduct disorder in children often presents along with attention deficit hyperactivity disorder (ADHD) and learning problems.
  3. CD in childhood and adolescence raises risk for alcohol, drug and nicotine dependence.
Margaret Sibley and colleagues recently published a study of CD, ADHD, and the later initiation and escalation of alcohol, cigarette and cannabis use.

In her study, 113 children with ADHD were assessed between 5 and 18 years of age. A control group of 65 children without ADHD was similarly assessed during this developmental period.

Twelve percent of the childhood ADHD group later met criteria for a diagnosis of CD, compared with only 1.5% of the control group. Also noteworthy was the high rate of oppositional defiant disorder in the ADHD group (59%) compared to only 5% of the control group.

Significant differences emerged in substance use. These key findings included the following:
  • ADHD adolescents were more likely to have ever smoked a cigarette (47% v 28%) and were much more likely to be daily smokers (27% v 6%).
  • ADHD adolescents were not more likely to have ever tried marijuana (53% v 51%) but were more likely to use at least weekly (23% v 8%).
  • There were no differences between the ADHD group and controls in ever use of alcohol or frequent drinking.
  • Maternal drinking in early childhood was the strongest predictor of adolescent alcohol use.

The authors also highlighted another important finding:
"escalating CD symptoms in childhood were viewed as a mediator of the relationship between ADHD and cigarette and marijuana use."

The authors noted in the longitudinal data analysis that increasing ADHD symptom endorsement predicted more CD symptoms, and more CD symptoms were the strongest predictor of later substance use.

The take-home message for clinicians is that early identification and treatment of ADHD in children is important: it holds promise for modifying later CD and substance use morbidity.

This is an important study teasing out some of the issues in ADHD/CD overlap and later substance use risk. Readers with more interest in this topic can access the free full-text manuscript by clicking on the PMID link in the citation below.

Graphic figure is an original created by the author using Canva.

Follow the author on Twitter @WRY999

Sibley MH, Pelham WE, Molina BS, Coxe S, Kipp H, Gnagy EM, Meinzer M, Ross JM, & Lahey BB (2014). The role of early childhood ADHD and subsequent CD in the initiation and escalation of adolescent cigarette, alcohol, and marijuana use. Journal of Abnormal Psychology, 123 (2), 362-74. PMID: 24886010

Wednesday, 20 May 2015

How Much Water Do You Need Each Day?

Raza Ahmad, MD, discusses the question of how much water to drink on a given day. Dr. Ahmad practices at Delancey Internal Medicine, located at Penn Medicine Washington Square.

We all know the human body is predominantly made up of water. We also know that the body thrives when it is properly hydrated. But what is "proper hydration"? Do we really need to be running to the water cooler every 15 minutes to fill up our water bottles? And do we really need to drink so much water? Can we drink other liquids instead? The answer is…well, a bit fluid.

Why Water is So Important

Water is a vital part of your body’s overall health — because it affects every cell of your body. It is a huge component of muscle and helps you produce energy. This is why you’ll often get a cramp when you work out without proper hydration.

Water also helps the body to maintain its temperature, remove waste and lubricate joints.

“Hydration has the greatest impact on training, performance and recovery," says Dr. Ahmad. "Dehydration, even at the lowest level, can result in impaired performance.”

How Do I Know I Am Hydrated Enough?

Dehydration occurs when your body loses more fluid, mostly water, than it takes in: more water moves out of your cells and body than you replace by drinking.

The signs and symptoms of dehydration range from minor to severe. Increased thirst, dry mouth, weakness, dizziness, confusion and fatigue are all signs that an individual may be lacking the proper amount of water.

Many think that dry mouth is one of the first signs of dehydration. If you are experiencing dry mouth, though, your body is likely already craving water. If you are concerned that you may be dehydrated, a good indicator is the color of your urine.

If your urine is anything but clear, you are lacking the water your body needs. Yellow and orange are not the colors of healthy urine. If your urine is brown, you need to speak with your physician.

Should I Only Drink Water?

It is rare, but you can actually drink too much water.

As much as you need water, too much in your system can create an imbalance between water and electrolytes. When you sweat or lose water in other ways, you lose electrolytes. Replenishing your body with water alone can dilute the electrolytes that are already running low in your system, and your sodium level could become very low (a condition called hyponatremia), making you very ill. Although this is rare, it does occur.

During intense physical activity where you are sweating quite a bit, you may want to grab a sports drink or coconut water rather than plain water. This will help to better replace sodium lost in sweat.

"For slower athletes, it is recommend to drink according to thirst," said Dr. Ahmad. "Elite level athletes should follow a recovery plan, which includes drinking water and carbohydrate-electrolyte drinks."

How Much Water Do I Need Each Day?

This is last for a reason. Although the question is rather simple, the answer is much more complicated.

Unfortunately, there really is no one-size-fits-all approach to the amount of water you should consume on a daily basis.

As a general rule of thumb, you should try to drink between half an ounce and an ounce of water for each pound you weigh, every day. For example, if you weigh 200 pounds, it is recommended you drink 100 ounces of water if you are performing non-strenuous activities.

If you are going to be working out or hiking, you definitely should add to those 100 ounces. It is recommended that you drink 12 ounces of water a couple hours before your activity and then another 12 ounces about 30 minutes prior to the start. Be sure to also drink throughout the activity.
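
To make the rule of thumb concrete, here is a minimal Python sketch of the arithmetic described above. The function name is ours, and the pre-activity bonus simply adds the two 12-ounce servings suggested for workouts; treat it as an illustration, not medical guidance.

```python
def daily_water_ounces(weight_lb, strenuous_activity=False):
    """Rule of thumb: 0.5 to 1.0 ounces of water per pound of body weight."""
    low, high = 0.5 * weight_lb, 1.0 * weight_lb
    if strenuous_activity:
        # Add the pre-workout servings suggested above:
        # 12 oz a couple of hours before, plus 12 oz about 30 minutes prior.
        low += 24
        high += 24
    return low, high

# Example from the article: a 200-pound person on a non-strenuous day
print(daily_water_ounces(200))  # (100.0, 200.0) -- the article quotes the low end
```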

Other factors to keep in mind when thinking about your daily water consumption include the environment and any illnesses or health conditions. Hot or humid air makes you sweat more and will require additional intake of fluid. If you are sick and vomiting or have a fever, you will again be losing fluid more quickly and need to replenish.

As you can tell, being properly hydrated is important but not a perfect science. The best approach is to listen to your body and look for the feedback signs.

"I can not emphasize enough the importance of hydration," says Dr. Ahmad. "It has the greatest impact on training performance and recovery. Lack of hydration can lead to severe problems, including muscle breakdown, kidney injury and electrolyte imbalances."

Keep Calm and Hydrate on!


Tuesday, 19 May 2015

If The Shoe Fits…Run!

Daniel C. Farber, MD, Foot and Ankle Surgeon, discusses the importance of picking the proper running shoe and offers advice on going through the selection process.

To avid runners, there is much more to the sport than simply lacing up any type of running shoe. They know that selecting the proper shoe is an incredibly important part of preparing for any type of competition.

If you're wearing a shoe that doesn't fit your feet properly, you have a greater chance of developing an improper gait or poor biomechanics. The added pressure on the heel or ball of the foot could lead to pain, plantar fasciitis, stress fractures, hallux valgus (bunions) and lesser toe deformities.

When you're standing in the shoe store, most running shoes feel comfortable. Still, the true test doesn't come until you've pounded the pavement for several miles.

"Not all shoes are the same. Make sure you choose a shoe that fits your foot shape and running style," Dr. Farber recommends. "Remember that the most expensive shoe is not necessarily the best. The midrange shoes often have the more proven technology without the trendy price."

So, how do you sift through the many different brands and styles to ensure your feet stay happy and you stay pain-free?
  • Try on all shoes. Sizes among shoe brands and styles tend to vary. Don’t select shoes by the size marked inside the shoe. Judge the shoe by how it fits on your foot.
  • Select a shoe that conforms as nearly as possible to the shape of your foot.
  • Have your feet measured regularly. The size of your feet changes with age. For women, it may change during pregnancy.
  • Have BOTH feet measured. Most people have one foot larger than the other. Always fit to the larger foot.
  • Get fitted at the end of the day. The best time to measure your feet is at the end of the day when your feet are largest.
  • Stand during the fitting process. Standing allows you to check that there is adequate space (3/8" to 1/2") for your longest toe at the end of each shoe.
  • Comfort is important. Make sure the ball of the foot fits comfortably into the widest part (ball pocket) of the shoe.
  • Don’t expect them to stretch. Avoid purchasing shoes that feel too tight, expecting them to “stretch” to fit.
  • Minimize slippage. Your heel should fit comfortably in the shoe with a minimum amount of slippage.
  • Take a stroll. Walk in the shoes to make sure they fit and feel right.
Finally, remember that knowing when to replace your running shoes is just as important as picking the proper pair. Dr. Farber recommends that running shoes should be replaced every 3-6 months (or 300-500 miles) as they show signs of wear.

"With less activity, once a year is adequate Dr. Farber said. "If a shoe is in good condition, but has lost the cushioning of its insole, you can oftentimes simply rehab it with an over-the-counter insert."

"However, if the shoe looks and feels worn and doesn't support your foot well, replace it no matter what the age."


Are Women Athletes More Susceptible to Injury?

Erik Thorell, DO, discusses whether gender plays a role in susceptibility to injury. Dr. Thorell practices at Penn Medicine Woodbury Heights.

In a perfect world, every run would be completely pain-free. No soreness, no aches and no lingering effects from the previous workout. Unfortunately, many runners constantly deal with some slight disturbance or other. There are things that both men and women can do to reduce the risk of injury.

Regardless of how careful you are, injuries do occur. And for women, the rate of injury is slightly higher. Runner’s knee, stress fractures, shin splints and plantar fasciitis are all injuries that are more common among female runners.

“One anatomical difference between men and women leading to greater predisposition to lower extremity injuries is the wider female pelvis, which results in a larger Q-angle,” says Erik Thorell, DO. “This results in increased stress across the knee in particular.”

Simply put, men and women are built differently. Women tend to have smaller, weaker muscles supporting their knees, as well as more lax ligaments. They typically have a larger hip width to femoral length ratio, which leads to greater hip adduction (the thigh angling inward toward the body's midline during activity). Females are also more at risk of certain injuries because there is added motion in their hips and pelvis.

When it comes to bone injuries, females are, again, more susceptible than their male counterparts. Women have smaller bone dimensions and are predisposed to lower bone density. Also, estrogen, a hormone in women that protects bones, decreases sharply as women age. All of these factors increase the risk of broken bones.

“Though gender differences do predispose women more to certain musculoskeletal injuries, attention to bone health, nutrition, core strengthening and a well-structured exercise routine can mitigate some of these problems,” explains Dr. Thorell.

Tips to Reduce the Risk of Injury

Because women suffer sports injuries more often than men, it is important they take extra care prior to playing sports or exercising. Below we offer certain exercises and other helpful tips:
  • Leg lifts, back bridges and standing hip flexors help to improve motion and flexibility in the hip and glutes area.
  • Weight-bearing exercises help to build and maintain bone density. Attend dance classes, go for hikes, pick up aerobics or simply get into fast walking.
  • Balance exercises, such as Tai-Chi, can help strengthen legs.
  • Wear proper footwear and work out on appropriate (not very hard) surfaces.
  • Don’t suddenly intensify or lengthen your workouts.


Monday, 18 May 2015

Brain Imaging and Conduct Disorder: Temporal Lobe Abnormalities

Conduct disorder is a complex behavioral disorder with significant risk for later adult psychopathology.

There is increasing evidence for a biological basis for conduct disorder: twin studies show a significant genetic contribution, and brain imaging studies also point to biological factors.

Gregory Wallace and colleagues recently published a structural MRI study of conduct disorder in 22 adolescents between the ages of 10 and 18. Conduct disorder subjects were compared to a group of 27 age-matched controls on imaging measures.

This study focused on measures of brain cortex thickness, brain surface area and degree of brain folding or gyrus formation.

Conduct disorder was linked to the following brain structural abnormalities:

  • Reduced cortical thickness in the superior temporal lobes
  • Reduced gyrus formation in the ventromedial frontal cortex
  • Reduced volume of the amygdala and striatum (putamen and pallidum)

The research group also found a negative correlation between superior temporal lobe thickness and psychometric measures of callous-unemotional traits.

The mechanism by which the temporal lobe contributes to the symptoms of conduct disorder is unclear. Cortical thinning in this region has been found in adults with psychopathy.

The authors note that amygdala/temporal lobe integration is necessary for stimulus-reinforcement learning. This integration may explain some of the deficits found in the current study.

This study will be important for continuing research into the genetics and pathophysiology of conduct disorder. Intervention strategies will need to address potential biological deficits contributing to the behavioral and learning problems in conduct disorder.

Readers with more interest in this research can access the free full-text manuscript by clicking on the PMID link in the citation below.

Image of brain with superior temporal lobe highlighted in green is an iPad screen shot from the Brain Tutor app.

Follow the author on Twitter @WRY999

Wallace GL, White SF, Robustelli B, Sinclair S, Hwang S, Martin A, & Blair RJ (2014). Cortical and subcortical abnormalities in youths with conduct disorder and elevated callous-unemotional traits. Journal of the American Academy of Child and Adolescent Psychiatry, 53 (4), 456-650 PMID: 24655655

Sunday, 17 May 2015

Microbiome Nonsense: response to "Chowing Down On Meat"


As the claim that animal protein and saturated fat are unhealthy becomes less and less tenable, those who have the intuition that animal-based nutrition must be bad for you are looking elsewhere.

There was great excitement at the end of 2013 about a study published in Nature demonstrating rapid changes in human gut microbes in response to animal-based vs. plant-based diets [1]. The paper is very interesting, and it has a lot of original data of a kind we've often wished for. The authors then go on, however, to interpret their findings without apparent restraint.

A report on the study on NPR called Chowing Down On Meat, Dairy Alters Gut Bacteria A Lot, And Quickly gets right to the point:

"Looks like Harvard University scientists have given us another reason to walk past the cheese platter at holiday parties and reach for the carrot sticks instead: Your gut bacteria will thank you."

and finally:

""I mean, I love meat," says microbiologist Lawrence David, who contributed to the study and is now at Duke University. "But I will say that I definitely feel a lot more guilty ordering a hamburger ... since doing this work," he says."

That's right. The excitement in the blog-o-sphere was not so much about the clear results — that the changes in the gut flora in response to diet are fast and large — but about the authors' opinions that the observed changes support a link between meat consumption and inflammatory bowel disease (IBD).

We take exception to these claims: they are not well founded by the data in this study or any other, and they do not warrant the conclusions drawn. We consider it irresponsible at best to suggest that a dietary practice is harmful to health when the evidence is weak, especially when one is in a position of authority and subject to high publicity.

Here are the points we address:

The Claims about Inflammatory Bowel Disease

Here are some quotes from the paper stressing the possible dangers of a carnivorous diet based on a supposed link to IBD — inflammatory bowel disease. Notice that they use language that implies the claims are proven, when as we will show, they are not.

"increases in the abundance and activity of Bilophila wadsworthia on the animal-based diet support a link between dietary fat, bile acids and the outgrowth of microorganisms capable of triggering inflammatory bowel disease [6]" — Abstract

"Bile acids have been shown to cause inflammatory bowel disease in mice by stimulating the growth of the bacterium Bilophila[6], which is known to reduce sulphite to hydrogen sulphide via the sulphite reductase enzyme DsrA (Extended Data Fig. 10)." — from figure 5, page 4.

"Mouse models have also provided evidence that inflammatory bowel disease can be caused by B. wadsworthia, a sulphite-reducing bacterium whose production of H2S is thought to inflame intestinal tissue [6]. Growth of B. wadsworthia is stimulated in mice by select bile acids secreted while consuming saturated fats from milk. Our study provides several lines of evidence confirming that B. wadsworthia growth in humans can also be promoted by a high-fat diet. First, we observed B. wadsworthia to be a major component of the bacterial cluster that increased most while on the animal-based diet (cluster 28; Fig. 2 and Supplementary Table 8). This Bilophila-containing cluster also showed significant positive correlations with both long-term dairy (P , 0.05; Spearman correlation) and baseline saturated fat intake (Supplementary Table 20), supporting the proposed link to milk-associated saturated fats[6]. Second, the animal-based diet led to significantly increased faecal bile acid concentrations (Fig. 5c and Extended Data Fig. 9). Third, we observed significant increases in the abundance of microbial DNA and RNA encoding sulphite reductases on the animal-based diet (Fig. 5d, e). Together, these findings are consistent with the hypothesis that diet-induced changes to the gut microbiota may contribute to the development of inflammatory bowel disease." — last paragraph, emphasis ours.

This concern is prominent in the paper; they start with it and end with it. It is based on a single citation to a study in mice.

Reasons those claims are not warranted

Let's look at that study (Dietary-fat-induced taurocholic acid promotes pathobiont expansion and colitis in Il10−/− mice [2]):

Here's the abstract (emphasis ours):

"The composite human microbiome of Western populations has probably changed over the past century, brought on by new environmental triggers that often have a negative impact on human health1. Here we show that consumption of a diet high in saturated (milk-derived) fat, but not polyunsaturated (safflower oil) fat, changes the conditions for microbial assemblage and promotes the expansion of a low-abundance, sulphite-reducing pathobiont, Bilophila wadsworthia2. This was associated with a pro-inflammatory T helper type 1 (TH1) immune response and increased incidence of colitis in genetically susceptible Il10−/−, but not wild-type mice. These effects are mediated by milk-derived-fat-promoted taurine conjugation of hepatic bile acids, which increases the availability of organic sulphur used by sulphite-reducing microorganisms like B. wadsworthia. When mice were fed a low-fat diet supplemented with taurocholic acid, but not with glycocholic acid, for example, a bloom of B. wadsworthia and development of colitis were observed in Il10−/− mice. Together these data show that dietary fats, by promoting changes in host bile acid composition, can markedly alter conditions for gut microbial assemblage, resulting in dysbiosis that can perturb immune homeostasis. The data provide a plausible mechanistic basis by which Western-type diets high in certain saturated fats might increase the prevalence of complex immune-mediated diseases like inflammatory bowel disease in genetically susceptible hosts."

Translation:

They took some mice that were particularly susceptible to colitis, along with some regular mice, and fed them one of three diets: a low-fat diet (if we're reading it correctly, the AIN-93M Purified Diet from Harlan, which is about 10% fat), or a 37% fat diet in which the fat was either polyunsaturated (safflower oil) or saturated milk fat. They didn't specify the amount of carbohydrate or protein, but we assume the diets were about 10-15% protein, leaving about 50% carbohydrate.

The mice who had the high milk-fat diet had a significant increase in the gut bacteria called Bilophila wadsworthia. The susceptible mice on the high milk-fat diet got colitis at a high rate (more than 60% in 6 months). The other susceptible mice, those on low-fat or polyunsaturated fat also got colitis, but at a lower rate (25-30%). The regular mice didn't get colitis, even on the high milk-fat diet.

What's the problem with knockout mice?

The mice that got colitis were susceptible because they were genetically manipulated to not function normally. Specifically, they couldn't produce something called interleukin-10 (IL-10). IL-10 has many complex actions, including fighting against inflammation in multiple ways.

The argument made by the scientists is that Bilophila wadsworthia must induce inflammation, and that colitis probably comes about in people who are less effective at fighting that inflammation, just like the knockout mice. This seems intuitive, but it is certainly not proven by the experiment.

Look at it this way:

Suppose we didn't know the cause of phenylketonuria, a genetic disorder that makes the victim unable to make enzymes necessary to process the amino acid phenylalanine. We could knockout that gene in an animal, feed it phenylalanine, watch it suffer retardation and seizures, and conclude that phenylalanine must promote brain problems. This would be a mistake, of course. Phenylalanine is an essential amino acid occurring in breast milk. As far as we know, there is nothing unhealthy about it, as long as you don't have a genetic mutation interfering with its metabolism.

It is, of course, possible that Bilophila wadsworthia inflames the colon. As a hypothesis, based on this study, it is not by itself objectionable.

What we object to is the leap to citing Bilophila wadsworthia as causing colitis, as in the second excerpt above, which we repeat here:

"Bile acids have been shown to cause inflammatory bowel disease in mice by stimulating the growth of the bacterium Bilophila[6], which is known to reduce sulphite to hydrogen sulphide via the sulphite reductase enzyme DsrA (Extended Data Fig. 10)." — from figure 5, page 4.

In fact, Bilophila did not appear to affect the normal mice at all!

There is no claim that the genetic mutation in the mice has any relation to genetic susceptibility to IBD in humans, yet it is implied that natural human susceptibility might work the same way.

Hydrogen Sulfide

In the knockout mice study, a second experiment was done to determine whether the Bilophila wadsworthia seen in the milk-fat condition came from a particular bile acid, taurocholic acid. They fed the knockout mice a low-fat diet supplemented with either taurocholic acid (TC) or glycocholic acid (GC). They confirmed that Bilophila wadsworthia was increased by taurocholic acid and not by glycocholic acid.

What else do we know about taurocholic acid?

According to the authors of this study, it is "a rich source of organic sulphur, […] resulting in the formation of H2S [hydrogen sulfide]". In one figure they even demonstrated the presence of Bilophila wadsworthia by the presence of H2S.

But H2S can be beneficial:

  • There is emerging evidence that H2S has diverse anti-inflammatory effects, as well as pro-inflammatory effects, possibly only at very high levels [3].
  • The levels needed for harm are probably higher than what occurs naturally [4].
  • H2S levels in the blood are associated with high HDL, low LDL, and high adiponectin in humans [5], all considered good things.

Moreover, there is now evidence that colon cells in particular can actually use H2S as fuel, and lots of it. Other researchers have used a similar argument in the opposite way: they claim that eating fiber is healthy because of the butyrate generated from it in the colon, which colon cells then use as fuel. While we have problems with that argument, it shows a pervasive bias: using the argument when it supports plants, but ignoring it when it doesn't.

Taking all this into account, it is not at all clear that the higher levels of sulfite-reducing bacteria seen in the meat and cheese eaters were unhealthy.

What would happen if a human sufferer of IBS went on an animal foods only diet?

It's clear that these researchers are not studying IBS at all. They were studying gut bacteria, found an association, and cherry-picked one study suggesting that what they found in the animal diet results might be unhealthy.

If they were studying IBS, they might have noticed reasons to hypothesise that a diet low in fiber [6], [7], carbohydrates [8], or fermentable carbohydrates [9] would help IBS sufferers. If humans who are susceptible to IBS are susceptible in the same way as the knockout mice in the cited study, then these results might be surprising. Instead, these results in combination with the animal diet paper, should further decrease our belief that the mice results have any relevance at all.

Moreover, unless the authors are advocating a diet of low-fiber, low-carb plants (can't think of any plants like that off the top of my head...), they are encouraging IBS sufferers to eat foods that may worsen their condition.

We don't know what would happen in an all meat trial for IBS, but we'd love to find out.

In Sum

The supposed link between the animal diet and inflammatory bowel disease is composed of a chain of weak links:

A kind of bacteria they found in those eating meat and cheese was also found in a mouse study that suggested a link between the bacteria and IBD.

However:

  • It used animals that were genetically engineered to not function normally.
  • It did not and cannot establish causality between the observed gut bacteria changes and the increased level of disease.
  • It was merely an observation of the two coinciding along with a plausible mechanism, i.e. a clever story about how this might be a causal relationship.

This plausible mechanism is not as clean a story as it appears. Presenting it as such is downright misleading.

References

[1]

Diet rapidly and reproducibly alters the human gut microbiome

Lawrence A. David, Corinne F. Maurice, Rachel N. Carmody, David B. Gootenberg, Julie E. Button, Benjamin E. Wolfe, Alisha V. Ling, A. Sloan Devlin, Yug Varma, Michael A. Fischbach, Sudha B. Biddinger, Rachel J. Dutton & Peter J. Turnbaugh
Nature (2013) doi:10.1038/nature12820
[2]

Dietary-fat-induced taurocholic acid promotes pathobiont expansion and colitis in Il10−/− mice

Suzanne Devkota, Yunwei Wang, Mark W. Musch, Vanessa Leone, Hannah Fehlner-Peach, Anuradha Nadimpalli, Dionysios A. Antonopoulos, Bana Jabri & Eugene B. Chang
Nature (2012) doi:10.1038/nature11225
[3]

Evidence type: review and non-human animal experiment

Wallace JL.
Trends Pharmacol Sci. 2007 Oct;28(10):501-5. Epub 2007 Sep 19.

"The notion of H2S being beneficial at physiological concentrations but detrimental at supraphysiological concentrations bears similarity to the situation with nitric oxide (NO), another gaseous mediator, which shares many biological effects with H2S. Also in common with NO, there is emerging evidence that physiological concentrations of H2S produce anti-inflammatory effects, whereas higher concentrations, which can be produced endogenously in certain circumstances, can exert pro-inflammatory effects [5]. Here, I focus on the anti-inflammatory effects of H2S, and on the concept that these effects can be exploited in the development of more effective and safer anti-inflammatory drugs."

[4]

Evidence type: review and non-human animal experiment

Wallace JL.
Trends Pharmacol Sci. 2007 Oct;28(10):501-5. Epub 2007 Sep 19.

(Emphasis ours)

"How much H2S is physiological? "H2S is present in the blood of mammals at concentrations in the 30–100 m M range, and in the brain at concentrations in the 50–160 m M range [1–3]. Even after systemic administration of H2S donors at doses that produce pharmacological effects, plasma H2S concentrations seldom rise above the normal range, or do so for only a very brief period of time [24,27]. This is, in part, due to the efficient systems for scavenging, sequestering and metabolizing H2S. Metabolism of H2S occurs through methylation in the cytosol and through oxidation in mitochondria, and it is mainly excreted in the urine [1]. It can be scavenged by oxidized glutathione or methemoglobin, and can bind avidly to hemoglobin. Exposure of certain external surfaces andtissues to H2S can trigger inflammation [28], perhaps because of a relative paucity of the above-mentioned scavenging, metabolizing and sequestering systems. The highest concentrations of H2S in the body occur in the lumen of the colon, although there is some disagreement [29] as to whether theconcentrations of ‘free’ H2S really reach the millimolar concentrations that have been reported in some studies [30,31]. Although often alluded to [32,33], there is no direct evidence that H2S causes damage to colonic epithelial cells. Indeed, colonocytes seem to be particularly well adapted to use H2S as a metabolic fuel [4]. "There have been several suggestions that H2S might trigger mutagenesis, particularly in the colon. For example, one recent report [33] suggested that the concentrations of H2S in ‘the healthy human and rodent colon’ are genotoxic. Despite the major conclusion of that study, the authors observed that exposure of cultured colon cancer epithelial cells (i.e. transformed cells) to concentrations of Na2S as high as 2 mM for 72 hours did not cause any changes consistent with a genotoxic effect (nor cell death). It was only when the experiments were performed in the presence of two inhibitors of DNA repair, and only with a concentration of 2 mM, that they were able to detect a significant genotoxic signal. It is also important to bear in mind that the concentrations of H2S used in studies such as that described above are often referred to as those found in the ‘healthy’ colon. Clearly, if concentrations of H2S in the healthy colon do reach the levels reported, and if H2S has the capacity to produce genotoxic changes and/or to reduce epithelial viability, there must be systems in place to prevent the putative untoward effects of this gaseous mediator – otherwise, the colon would probably not be ‘healthy’"

[5]

Evidence type: observational

Jain SK, Micinski D, Lieblong BJ, Stapleton T.
Atherosclerosis. 2012 Nov;225(1):242-5. doi: 10.1016/j.atherosclerosis.2012.08.036. Epub 2012 Sep 10.

"Hydrogen sulfide (H2S) is an important signaling molecule whose blood levels have been shown to be lower in certain disease states. Increasing evidence indicates that H2S plays a potentially significant role in many biological processes and that malfunctioning of H2S homeostasis may contribute to the pathogenesis of vascular inflammation and atherosclerosis. This study examined the fasting blood levels of H2S, HDL-cholesterol, LDL-cholesterol, triglycerides, adiponectin, resistin, and potassium in 36 healthy adult volunteers. There was a significant positive correlation between blood levels of H2S and HDL-cholesterol (r=0.49, p=0.003), adiponectin (r=0.36, p=0.04), and potassium (r=0.34, p=0.047), as well as a significant negative correlation with LDL/HDL levels (r= -0.39, p=0.02). "

[6]

Evidence type: preliminary experiment

J. T. Woolner and G. A. Kirby
Journal of Human Nutrition and Dietetics Volume 13, Issue 4, pages 249–253, August 2000

"Abstract

Introduction High-fibre diets are frequently advocated for the treatment of irritable bowel syndrome (IBS) although there is little scientific evidence to support this. Experience of patients on low-fibre diets suggests that this may be an effective treatment for IBS, warranting investigation.

Methods Symptoms were recorded for 204 IBS patients presenting in the gastroenterology clinic. They were then advised on a low-fibre diet with bulking agents as appropriate. Symptoms were reassessed by postal questionnaire 4 weeks later. Patients who had improved on the diet were advised on the gradual reintroduction of different types of fibre to determine the quantity and type of fibre tolerated by the individual.

Results Seventy-four per cent of questionnaires were returned. A significant improvement (60–100% improvement in overall well-being) was recorded by 49% of patients.

Conclusion This preliminary study suggests that low-fibre diets may be an effective treatment for some IBS patients and justifies further investigation as a full clinical trial."

[7]

Evidence type: Review

Eswaran S, Muir J, Chey WD.
Am J Gastroenterol. 2013 May;108(5):718-27. doi: 10.1038/ajg.2013.63. Epub 2013 Apr 2.

"Abstract

Despite years of advising patients to alter their dietary and supplementary fiber intake, the evidence surrounding the use of fiber for functional bowel disease is limited. This paper outlines the organization of fiber types and highlights the importance of assessing the fermentation characteristics of each fiber type when choosing a suitable strategy for patients. Fiber undergoes partial or total fermentation in the distal small bowel and colon leading to the production of short-chain fatty acids and gas, thereby affecting gastrointestinal function and sensation. When fiber is recommended for functional bowel disease, use of a soluble supplement such as ispaghula/psyllium is best supported by the available evidence. Even when used judiciously, fiber can exacerbate abdominal distension, flatulence, constipation, and diarrhea."

[8]

Evidence Type: uncontrolled experiment

Austin GL, Dalton CB, Hu Y, Morris CB, Hankins J, Weinland SR, Westman EC, Yancy WS Jr, Drossman DA.
Clin Gastroenterol Hepatol. 2009 Jun;7(6):706-708.e1. doi: 10.1016/j.cgh.2009.02.023. Epub 2009 Mar 10.

"Abstract Background & Aims

Patients with diarrhea-predominant IBS (IBS-D) anecdotally report symptom improvement after initiating a very low-carbohydrate diet (VLCD). This is the first study to prospectively evaluate a VLCD in IBS-D. Methods

Participants with moderate to severe IBS-D were provided a 2-week standard diet, then 4 weeks of a VLCD (20 grams of carbohydrates/day). A responder was defined as having adequate relief (AR) of gastrointestinal symptoms for 2 or more weeks during the VLCD. Changes in abdominal pain, stool habits, and quality of life (QOL) were also measured. Results

Of the 17 participants enrolled, 13 completed the study and all met the responder definition, with 10 (77%) reporting AR for all 4 VLCD weeks. Stool frequency decreased (2.6 ± 0.8/day to 1.4 ± 0.6/day; p<0.001). Stool consistency improved from diarrheal to normal form (Bristol Stool Score: 5.3 ± 0.7 to 3.8 ± 1.2; p<0.001). Pain scores and QOL measures significantly improved. Outcomes were independent of weight loss. Conclusion

A VLCD provides adequate relief, and improves abdominal pain, stool habits, and quality of life in IBS-D."

[9]

Evidence type: review

Suma Magge, MD, and Anthony Lembo, MD
Gastroenterol Hepatol (N Y). 2012 Nov; 8(11): 739–745.

"Summary

A low-FODMAP diet appears to be effective for treatment of at least a subset of patients with IBS. FODMAPs likely induce symptoms in IBS patients due to luminal distention and visceral hypersensitivity. Whenever possible, implementation of a low-FODMAP diet should be done with the help of an experienced dietician. More research is needed to determine which patients can benefit from a low-FODMAP diet and to quantify the FODMAP content of various foods, which will help patients follow this diet effectively."

Thursday, 14 May 2015

Ornish Diet Worsens Heart Disease Risk: Part I

Dr. Dean Ornish has come under a lot of criticism lately for his misleading statements about diet and heart disease. See, for example: Critique of Dean Ornish Op-ed, by Nina Teicholz, and Why Almost Everything Dean Ornish Says about Nutrition Is Wrong, from Scientific American.

Ornish made his name with a study that claimed to actually reverse heart disease [1]. There are at least three problems with the study.

First, it included several confounders to the dietary regimen. For example, the intervention groups spent an hour a day on stress management techniques, such as meditation, and three hours a week exercising.

Second, although it was touted as the first study to look at "actual" heart disease results, it made no measurements of cardiac events! Instead, it was based on measuring stenosis — the degree of narrowing of coronary arteries. Considering that stenosis is only a predictor of cardiac events, it seems disingenuous to call it a direct measure of heart disease.

Stenosis is used to predict heart disease (though it is often not the previously found blockages that are the ultimate culprits [2]). However, the measurement has a lot of variability. Because of this, differences in measurements over time need to be quite large to show a true progression or regression, and not just error. We found three studies attempting to pinpoint the minimum difference in measurements needed to make such a claim. They recommended 15%, 9.3%, and 7.8%, respectively, as a basis for this judgment [3], [4], [5].

So how much reduction of stenosis was there in Ornish's study?

"The average percentage diameter stenosis decreased from 40.0 (SD 16.9)% to 37.8 (16.5)% in the experimental group yet progressed from 42.7 (15.5)% to 46.11 (18.5)% in the control group (p = 0.001, two-tailed)."

That's the extent of the success in a year: a -2.2% change supporting the claim of "regression" vs. a 3.4% change supporting the claim of "progression". Neither change reaches significance given the variability of the measurement tool.
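
To see why, here is a quick sanity check of our own, comparing the reported changes against the three published minimum-difference thresholds:

```python
# Reported one-year changes in percent-diameter stenosis from the Lifestyle Heart Trial
experimental_change = 37.8 - 40.0   # -2.2 points, claimed "regression"
control_change = 46.1 - 42.7        # +3.4 points, claimed "progression"

# Minimum detectable differences recommended by refs [3], [4], [5]
thresholds = [15.0, 9.3, 7.8]

for label, change in [("experimental", experimental_change), ("control", control_change)]:
    # A change is distinguishable from measurement error only if its
    # absolute value exceeds the threshold.
    significant = [abs(change) >= t for t in thresholds]
    print(label, round(change, 1), significant)
# Neither group's change clears even the most lenient threshold (7.8 points).
```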

Fortunately, there were other measurements taken that are also predictors of cardiac events: blood lipids. Even the AHA, an association that changes its mind slowly in response to evidence, considers triglycerides above 100 to be higher than optimal [6]. Low HDL is a strong marker of heart disease, with HDL below 40 considered by the AHA a "major heart disease risk factor" [7]. The intervention group went from an average triglyceride level of 211 to 258, and their HDL from 39 to 38. This shows that the intervention actually worsened the participants' risk factors!

Moreover, although not acknowledged by the AHA, we know that the ratio of triglycerides to HDL is a very strong predictor of heart disease; among the best [8]. A triglyceride-to-HDL ratio of less than 2 is considered ideal. Over 4 is considered risky. Over 6 is considered very high risk. The intervention group's average triglyceride-to-HDL ratio leapt from 5.4 to 6.8! It went from bad to worse. Thus, the third problem with the study is that it actually showed a worsening of heart disease risk by other important measures.
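
The ratio arithmetic is easy to verify. A minimal sketch, using the trial's reported lipid values and the cutoffs just described; the band between 2 and 4, which the text leaves unnamed, is labeled "intermediate" here as our own placeholder:

```python
def tg_hdl_ratio(triglycerides_mg_dl, hdl_mg_dl):
    return triglycerides_mg_dl / hdl_mg_dl

def risk_band(ratio):
    # Cutoffs as described above: < 2 ideal, > 4 risky, > 6 very high risk.
    if ratio < 2:
        return "ideal"
    if ratio <= 4:
        return "intermediate"  # unnamed in the text; our placeholder label
    if ratio <= 6:
        return "risky"
    return "very high risk"

before = tg_hdl_ratio(211, 39)  # intervention group at baseline
after = tg_hdl_ratio(258, 38)   # after one year on the program
print(round(before, 1), risk_band(before))  # 5.4 risky
print(round(after, 1), risk_band(after))    # 6.8 very high risk
```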

The bottom line is that Ornish's study never showed what it claimed to show.

After a year of intervention, even with other lifestyle changes incorporated, the subjects on his diet had a higher risk of heart disease than before they started.


References

[1]Ornish, Dean, et al. "Can lifestyle changes reverse coronary heart disease?: The Lifestyle Heart Trial." The Lancet 336.8708 (1990): 129-133.
[2]

Evidence type: experiment

Little WC, Constantinescu M, Applegate RJ, Kutcher MA, Burrows MT, Kahl FR, Santamore WP.
Circulation. 1988 Nov;78(5 Pt 1):1157-66.

Abstract

To help determine if coronary angiography can predict the site of a future coronary occlusion that will produce a myocardial infarction, the coronary angiograms of 42 consecutive patients who had undergone coronary angiography both before and up to a month after suffering an acute myocardial infarction were evaluated. Twenty-nine patients had a newly occluded coronary artery. Twenty-five of these 29 patients had at least one artery with a greater than 50% stenosis on the initial angiogram. However, in 19 of 29 (66%) patients, the artery that subsequently occluded had less than a 50% stenosis on the first angiogram, and in 28 of 29 (97%), the stenosis was less than 70%. In every patient, at least some irregularity of the coronary wall was present on the first angiogram at the site of the subsequent coronary obstruction. In only 10 of the 29 (34%) did the infarction occur due to occlusion of the artery that previously contained the most severe stenosis. Furthermore, no correlation existed between the severity of the initial coronary stenosis and the time from the first catheterization until the infarction (r2 = 0.0005, p = NS). These data suggest that assessment of the angiographic severity of coronary stenosis may be inadequate to accurately predict the time or location of a subsequent coronary occlusion that will produce a myocardial infarction.

[3]

Evidence type: experiment

Abstract

BACKGROUND:

Clinical trials with angiographic end points have been used to assess whether interventions influence the evolution of coronary atherosclerosis because sample size requirements are much smaller than for trials with hard clinical end points. Further studies of the variability of the computer-assisted quantitative measurement techniques used in such studies would be useful to establish better standardized criteria for defining significant change.

METHODS AND RESULTS:

In 21 patients who had two arteriograms 3-189 days apart, we assessed the reproducibility of repeat quantitative measurements of 54 target lesions under four conditions: 1) same film, same frame; 2) same film, different frame; 3) same view from films obtained within 1 month; and 4) same view from films 1-6 months apart. Quantitative measurements of 2,544 stenoses were also compared with an experienced radiologist's interpretation. The standard deviation of repeat measurements of minimum diameter from the same frame was very low (0.088 mm) but increased to 0.141 mm for measurements from different frames. It did not increase further for films within 1 month but increased to 0.197 mm for films 1-6 months apart. Diameter stenosis measurements were somewhat more variable. Measurement variability for minimum diameter was independent of vessel size and stenosis severity. Experienced radiologists did not systematically overestimate or underestimate lesion severity except for mild overestimation (mean 3.3%) for stenoses > or = 70%. However, the variability between visual and quantitative measurements was two to three times higher than the variability of paired quantitative measurements from the same frame.

CONCLUSIONS:

Changes of 0.4 mm or more for minimum diameter and 15% or more for stenosis diameter (e.g., 30-45%), measured quantitatively, are recommended as criteria to define progression and regression. Approaches to data analysis for coronary arteriographic trials are discussed.

[4]

Evidence type: experiment

Brown BG, Hillger LA, Lewis C, Zhao XQ, Sacco D, Bisson B, Fisher L.
Circulation. 1993 Mar;87(3 Suppl):II66-73.

Abstract

BACKGROUND:

Imaging trials using arteriography have been shown to be effective alternatives to clinical end point studies of atherosclerotic vascular disease progression and the effect of therapy on it. However, lack of consensus on what end point measures constitute meaningful change presents a problem for quantitative coronary arteriographic (QCA) approaches. Furthermore, standardized approaches to QCA studies have yet to be established. To address these issues, two different arteriographic approaches were compared in a clinical trial, and the degree of concordance between disease change measured by these two approaches and clinical outcomes was assessed.

METHODS AND RESULTS:

In the Familial Atherosclerosis Treatment Study (FATS) of three different lipid-lowering strategies in 120 patients, disease progression/regression was assessed by two arteriographic approaches: QCA and a semiquantitative visual approach (SQ-VIS). Lesions classified with SQ-VIS as "not," "possibly," or "definitely" changed were measured by QCA to change by 10% stenosis in 0.3%, 11%, and 81% of cases, respectively. The "best" measured value for distinguishing definite from no change was identified as 9.3% stenosis by logistic regression analysis. The primary outcome analysis of the FATS trial, using a continuous variable estimate of percent stenosis change, gave almost the same favorable result whether by QCA or SQ-VIS.

CONCLUSIONS:

The excellent agreement between these two fundamentally different methods of disease change assessment and the concordance between disease change and clinical outcomes greatly strengthens confidence both in these measurement techniques and in the overall findings of the study. These observations have important implications for the design of clinical trials with arteriographic end points.

[5]

Evidence type: experiment

Gibson CM, Sandor T, Stone PH, Pasternak RC, Rosner B, Sacks FM.
Am J Cardiol. 1992 May 15;69(16):1286-90.

Abstract

The purpose of this study was (1) to determine a threshold for categorizing individual coronary lesions as either significantly progressing or regressing, (2) to determine whether multiple lesions within individual patients progress at independent rates, and (3) to calculate sample sizes for atherosclerosis regression trials. Seventeen patients with 46 significant lesions (2.7 lesions/patient) underwent repeat coronary arteriography 3.0 years apart. With use of the standard error of the mean change in diameter from initial to repeat catheterization across 5 pairs of consecutive end-diastolic frames, individual lesions were categorized as either significantly (p less than 0.01) progressing or regressing if there was a 0.27 mm change in minimum diameter or a 7.8 percent point change in percent stenosis. The mean diameter change of a sample of lesions can also be analyzed as a continuous variable using either the lesions or the patient as the primary unit of analysis. A lesion-specific analysis can be accomplished using a multiple regression model that accounts for the intraclass correlation (rho) in the degree of change among multiple lesions within individual patients. The intraclass correlations in percent stenosis (rho = 0.01) and minimum diameter (rho = -0.24) were low, indicating that disease progression in different lesions within individual patients is nearly independent. With use of this model, 50 patients per treatment group would permit the detection of a 5.5% difference between treatment group means in the change in minimum diameter and a 2.7% percentage point (not percent) difference in the change in percent stenosis.(ABSTRACT TRUNCATED AT 250 WORDS)

[6]

From The American Heart Association's "Scientific Statement"

"New clinical recommendations include reducing the optimal triglyceride level from <150 mg/dL to <100 mg/dL, and performing non-fasting triglyceride testing as an initial screen."

[7]

From Levels of Cholesterol

Less than 40 mg/dL for men; less than 50 mg/dL for women: Major heart disease risk factor

60 mg/dL or higher: Gives some protection against heart disease

[8]

Evidence type: observational

Gaziano JM, Hennekens CH, O'Donnell CJ, Breslow JL, Buring JE.
Circulation. 1997 Oct 21;96(8):2520-5.

Abstract

BACKGROUND:

Recent data suggest that triglyceride-rich lipoproteins may play a role in atherogenesis. However, whether triglycerides, as a marker for these lipoproteins, represent an independent risk factor for coronary heart disease remains unclear, despite extensive research. Several methodological issues have limited the interpretability of the existing data.

METHODS AND RESULTS:

We examined the interrelationships of fasting triglycerides, other lipid parameters, and nonlipid risk factors with risk of myocardial infarction among 340 cases and an equal number of age-, sex-, and community-matched control subjects. Cases were men or women of <76 years of age with no prior history of coronary disease who were discharged from one of six Boston area hospitals with the diagnosis of a confirmed myocardial infarction. In crude analyses, we observed a significant association of elevated fasting triglycerides with risk of myocardial infarction (relative risk [RR] in the highest compared with the lowest quartile=6.8; 95% confidence interval [CI]=3.8 to 12.1; P for trend <.001). Results were not materially altered after control for nonlipid coronary risk factors. As expected, the relationship was attenuated after adjustment for HDL but remained statistically significant (RR in the highest quartile=2.7; 95% confidence interval [CI]=1.4 to 5.5; P for trend=.016). Furthermore, the ratio of triglycerides to HDL was a strong predictor of myocardial infarction (RR in the highest compared with the lowest quartile=16.0; 95% CI=7.7 to 33.1; P for trend <.001).

CONCLUSIONS:

Our data indicate that fasting triglycerides, as a marker for triglyceride-rich lipoproteins, may provide valuable information about the atherogenic potential of the lipoprotein profile, particularly when considered in context of HDL levels.

Male Depression Risk Via Childhood Conduct Disorder

Conduct disorder represents an important childhood-onset condition that commonly persists into adulthood.

Adult antisocial personality disorder and substance abuse are known risks associated with conduct disorder.

A recent study by Kenneth Kendler and Charles Gardner identified childhood conduct disorder in males as a risk factor for adult major depression.

Their study using the Virginia Twin Registry examined 20 developmental risk factors in male and female twins for presence of recent adult major depression.

A key finding in their study was gender specificity for several of the developmental risk factors. Many of the developmental risk factors increased risk for later depression in both males and females.

However, several developmental risk factors showed a predominant effect in males. These male predominant risk factors included the following variables:

  • Conduct disorder
  • History of childhood sexual abuse
  • Drug abuse
  • Past major depression
  • Stressful life events

Conduct disorder and drug abuse were classified as having a moderate effect size for male predominance.

Specific types of stressful life events were noted to have a strong male predominance. Stressful life events that included financial loss, occupational difficulty and legal problems were more commonly found in the male twins with depression.

The authors note:
"Our results with externalizing psychopathology are consistent with a wide range of studies finding that men have higher rates of conduct disorder and drug abuse and that both of these disorders are associated with a higher risk for major depression."
The take-home message for clinicians is that assessment of childhood conduct disorder is important in children, adolescents and adults. In adult males, childhood conduct disorder represents an important risk factor for adult major depression.

Readers with more interest in this research can access the free full-text abstract and manuscript by clicking on the DOI link in the citation below.

Photo of eclectus parrot pair is from the author's files.

Follow the author on Twitter @WRY999

Kendler, K., & Gardner, C. (2014). Sex Differences in the Pathways to Major Depression: A Study of Opposite-Sex Twin Pairs American Journal of Psychiatry, 171 (4), 426-435 DOI: 10.1176/appi.ajp.2013.13101375

Wednesday, 13 May 2015

Conduct Disorder Research Links: II

Here are some additional links to important recent research in conduct disorder.

Click on the title to be directed to the abstract. Many of the abstracts also have links to the free full-text manuscripts.

This twin study examined the correlates of childhood versus adolescent onset conduct disorder. Both types showed a strong genetic influence (62% and 65%). Childhood onset CD was strongly linked to adult antisocial behavior while adolescent onset CD was not. A specific genetic factor was felt to contribute to a combined CD/ADHD phenotype.

Twenty developmental risk factors were examined for contribution to later major depression in a series of opposite-sex twins. The authors identified conduct disorder as a stronger predictor of depression in males. Financial, legal and occupational life events also uniquely contributed a strong influence on depression risk in men.

This study looked at an evoked brain wave potential (known as the P300) as a biomarker for conduct disorder risk. Reduced P300 amplitude was linked to conduct disorder symptoms. Both reduced P300 and conduct disorder appeared related to a genetic contribution.

A group of adolescents with externalizing disorders was compared to a control group using MRI and diffusion tensor imaging. Disruptive behavior disorder was linked to delayed white matter maturation in the corpus callosum and superior longitudinal fasciculus.

This study found reduced cortical volume and cortical surface area in the prefrontal cortex of adolescents with a conduct disorder diagnosis. The authors note these findings may help identify neurodevelopmental mechanisms in the disorder.

Kindergartners in this study were screened for conduct disorder and randomly assigned to a 10-year intervention versus control. At 25 years of age, the intervention group had a lower rate of any psychiatric disorder (59% versus 69%). Intervention subjects also had lower scores on violent and drug crime behavior. Another study using this cohort found lower scores on adolescent conduct disorder and oppositional defiant disorder compared to controls.

Photo of entry to Dingle Bay in Ireland is from the author's files.

Follow the author on Twitter @WRY999.

Thursday, 7 May 2015

Conduct Disorder: Predictors, Gender and Genetics

Genetic factors contribute to risk for many childhood mental disorders.

Gender issues in childhood psychopathology are also important factors.

Boys show higher rates of conduct disorder (CD), oppositional defiant disorder (ODD) and attention deficit hyperactivity disorder (ADHD).

Nora Kerekes and colleagues in Sweden and Australia examined childhood behavioral and neurobehavioral disorders in a large twin study. The aims of this study were to better understand developmental and genetic features, with attention to gender issues.

Key features of design of this study included:

  • Twin status, identical (monozygotic, MZ) vs fraternal (dizygotic, DZ), was assigned via algorithm and saliva samples
  • Clinical features were assessed using the Autism-Tics, AD/HD and other Comorbidities inventory
  • Data analysis included descriptive statistics and twin gene-environment modelling

As expected, ODD and CD rates were higher in boy twins than in girl twins. For ODD the boy:girl prevalence rates were 3.5%:2.1%. For conduct disorder the boy:girl prevalence rates were 1.3%:0.6%.

The research team found two distinct neurodevelopmental predictors. Early concentration/attention problems led to an increase in later ODD in both boys and girls. Additionally, early social interaction problems were linked to increased CD rates in both genders.

Higher monozygotic versus dizygotic twin concordance rates indicate a significant genetic contribution. The original figure above shows the MZ v DZ rates for the boy twins abstracted from the manuscript. All three behavioral disorders as well as autism spectrum disorder (ASD) showed significant genetic contributions in the boy twins.

For girl twins a genetic contribution was identified for ODD, ADHD and ASD. Of note, CD rates were not increased in MZ girl twins compared to DZ girl twins.

The authors note their findings have clinical implications including:

  • Clinicians need to be aware of the differences between boys and girls with ODD and CD
  • Inattention problems are important but less evident than behavioral problems. Clinicians should be diligent in looking for attentional problems in girls as a risk for later CD
  • Social interaction problems are common predictors and common comorbid features in ODD and CD
  • Treatment of ODD and CD in both boys and girls should include family interventions with multimodal interventions targeting improvement in "social interaction and communication abilities".

This is an important study that emphasizes a comprehensive surveillance and assessment program for behavioral problems in both boys and girls.

Readers with more interest in this study can access the free full-text manuscript by clicking on the DOI link in the citation below.

Follow the author on Twitter @WRY999

Kerekes, N., Lundström, S., Chang, Z., Tajnia, A., Jern, P., Lichtenstein, P., Nilsson, T., & Anckarsäter, H. (2014). Oppositional defiant- and conduct disorder-like problems: neurodevelopmental predictors and genetic background in boys and girls, in a nationwide twin study PeerJ, 2 DOI: 10.7717/peerj.359