
The Narcissistic Enemies of Research into Excessive Non-Maternal Daycare

An Example of Narcissistic Research Justifying Excessive Daycare

Parents looking to make an educated decision on how much daycare to use are often pulled in many directions when they review the research.  Some articles confidently discuss the harms of daycare while others tout its benefits.  How do you sort out two opposite claims?  A lot of the research is of very low quality, but because its authors present it so pretentiously while seeking acceptance from other researchers, parents don’t know what decision to make. 

The situation is made even worse by researchers who are agenda-driven and willing to outright lie to advance their pet causes.  These pet causes can be informed by deeply anti-social values and expectations, or they can be as simple as advancing one’s career.  Researchers recently discovered that a great deal of research into Alzheimer’s was faked, all to advance one researcher’s career; the harm he caused did not matter to him.

This leaves the state of modern research in absolute shambles while the participants put on a face as though the system were still working.  Childish people find one study they like and then claim that they “trust the science” as they attack those around them who take a more mature and holistic approach to research.  This section explains why people oppose responsible research and gives an overview of a widely accepted study supporting daycare that was done in a reprehensible and deplorable way.

Identifying good versus shoddily designed research

One of the most important things that separates good research from poor research is a proper explanation of the research’s limitations.  This cascades into a discussion of what methods the researchers used and why.  There are always trade-offs that must be made given financial budgets and time constraints.  In other words, a research project on a limited budget that is honest and thorough in discussing its limitations is a much better research project than one with many times the budget done by researchers whose narcissistic agenda undermines its integrity.

Good examples

The original research into non-maternal daycare was mostly done by university researchers who schlepped down to the university-provided daycare and described what they saw in the children.  They understood that the quality of care provided there may have been higher than in the community, given the budget and staff available to the university, so they could really focus on the time component: how much daycare was too much.

Despite the limitations of this research, the researchers were honest about them.  They did not over-generalize their findings.  They performed only the descriptive statistics such a limited research design could support.  Later researchers would draw blood from children in various types of daycare to track their stress hormones. They found through meta-analysis (which has its own limitations) that stress in children goes up after a certain amount of time away from the parents (separation anxiety). This increase in stress occurs even if the kids are not breaking down emotionally or acting out, and the rise in stress hormones is worse for children in low-quality daycare.  Very solid research designs were used.  I could go on.

Bad Example

I selected this as a bad example of research because of how widely news outlets touted it. Secondly, it was important to select an article that is available for free. Early Child Care and Adolescent Functioning at the End of High School: Results from the NICHD Study of Early Child Care and Youth Development (europepmc.org)

My list of criticisms is long.  I shall limit myself to the most flagrant violations. 

Criticism 1: Bad Underlying Assumptions

A lot of researchers improperly bias their projects by assuming the college track is the premier track for high school students.  The research I am criticizing does the same: it assumes that wanting to go to college is a good thing in itself, when there could be good or bad reasons for wanting to go to college. The research also does not measure other beneficial life plans for high schoolers that are just as valid as attending a prestigious college.  What is wrong with wanting to go to a trade school, being a homemaker, joining the military, or getting a job right out of high school with a good health care and retirement plan?  By focusing on college-centric outcomes and ignoring other valid forms of success, this research narcissistically devalues those other avenues of success.

I assume it’s a bias: over-educated people want to over-educate others.  They cannot see past their pretensions and probably do not care how condescending they appear.  There are plenty of high-achieving people without college degrees and low-functioning people with college degrees.  With that in mind, judging the success or failure of parents based on whether their children go to college is snobbish and elitist.  It would be much better to judge parents on their ability to raise children with an ACE score of zero, or on their work to correct any anti-social values and behaviors they have.

It is not uncommon to read a research paper about the effects of daycare or after-school activities that reports on academic success but not on vocational success, like wanting to enter a trade school or joining the military.  The researchers’ narrow-minded pretentiousness stunts their analysis[2] because they assume well-adjusted people want to go to college, when that may not be the case. 

Considering that the mental health of college students is getting worse, we could argue that many people are successful despite going to college rather than because of it. Mental Health of College Students Is Getting Worse | The Brink | Boston University (bu.edu). What kind of well-adjusted person wants to rack up college debt and damage their mental health when they could be just as or more emotionally, financially, and mentally successful on a different path in life?

Given the depressing reality of the college experience and its debt, wanting to go to college could just as easily be coded as a negative outcome as a positive one compared to the alternatives.

There are lots of professions with above-average suicide rates that require a college or advanced degree. Top 11 Professions with Highest Suicide Rates – Mental Health Daily.  Encouraging people to enter these fields, or coding entry into them positively without some very serious qualifying statements, is pretty immoral.  You want to know what was not on the list that the study didn’t code for? Homeschool moms.  Does that mean anything?  Without the proper research design, there is no knowing.

Criticism 2: Survey Data Response Rate

The mailed survey is one of the worst forms of data collection. Researchers still widely use surveys because of how cheap and convenient they are.  You run into many problems, such as poor response rates and an inability to qualitatively assess the respondent for dishonesty.[1]

Dishonesty aside, the most important thing to consider when using a mailed survey is the response rate. The longer the research goes on and the more frequently you ask someone to respond, the more people stop responding; respondents become fatigued.  When the response rate is low, the lack of data invalidates the conclusions drawn from it. One of the pitfalls researchers face is making confident assumptions about why people stopped responding to their survey.
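To see how quickly that fatigue compounds, here is a minimal sketch. The per-wave retention rate and number of waves are illustrative assumptions, not figures from the study:

```python
# Hypothetical illustration: even a seemingly high per-wave retention
# rate compounds into heavy cumulative attrition over a long study.
def cumulative_retention(per_wave_rate: float, waves: int) -> float:
    """Fraction of the original sample still responding after `waves` rounds."""
    return per_wave_rate ** waves

# Assumed numbers for illustration only: 95% retention per wave, 15 waves.
print(round(cumulative_retention(0.95, 15), 2))  # → 0.46
```

Even if 95% of respondents come back each time, fewer than half of the original sample is still answering after fifteen rounds.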

Researchers have known the importance of response rates to longitudinal data for a long time.  Quite frankly, doing rigorous analysis with this data is a farce, because of the astoundingly poor response rate of about 65% of the people enrolled in the study.

[T]he difficulty of making valid statistical inferences in the face of missing data will continue to plague researchers. In an ideal situation, all potential survey participants would respond; in reality, the goal of an 80 to 90% response rate is very difficult to achieve. When nonresponse is systematic, the combination of low response rate and systematic differences can severely bias inferences that are made by the researcher to the population …. A convenient sample lacks the statistical properties of a probability sample that allow the validity of its inferences to be assessed strictly from a mathematical framework (emphasis added).

https://pubmed.ncbi.nlm.nih.gov/10162904/

In other words, a low response rate means that every conclusion a researcher makes is at severe risk of being biased. This would be true even with a probability sample instead of a convenience sample.

It gets even worse. Of all the families she approached in the hospital, only 52% were available a month later.  So she has a 65% response rate among the 52% who were available a month later; roughly one third of the original families participated for the full length of the survey. She has no knowledge of why people stopped responding. Researchers cannot make inferences that A is related to B under those conditions. Without the ability to say things are related, you cannot say one causes or influences the other. You should not conclude that after-school care or preschool is related to, or causes, anything, including the successful outcomes the researcher claims.
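The arithmetic, using the two percentages above, works out like this:

```python
# Compounding the study's two attrition figures: only 52% of the screened
# families were still available a month later, and of those enrolled,
# about 65% kept responding for the full length of the survey.
available_after_month = 0.52
response_rate = 0.65

effective_participation = available_after_month * response_rate
print(round(effective_participation, 2))  # → 0.34, roughly one third
```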

Criticism 3: Convenience Sample

In order to do advanced statistics, researchers have to make some effort to control for a wide variety of factors.  Researchers of integrity put a lot of effort into their sampling methodology to remove their personal bias. If a researcher has time or budgetary constraints, they are honest about them.

But the researcher in question did not do an outstanding job of creating her sample.  Remember, much of the historic research into daycare outcomes admitted that the researchers just schlepped down to the university daycare.  They admitted the limitations of their research and did not run advanced statistics on their data.

The researchers for the study we are criticizing schlepped into ten city hospitals within the United States for one day and screened parents of newborns for eligibility.  They did not make the screening criteria readily available, and they did not explain how or why they selected the cities. The researchers should have clearly articulated both.

Simply put, they used a convenience sample that lacks the properties required to support their regression methods, and therefore their conclusions. The researchers created this study with an invalid design.

Criticism 4: Data Manipulation

Sometimes you want to do some data analysis but you have missing data.  Missing data means that you cannot make reliable or generalizable conclusions. You cannot say A is related to B, and without showing a relationship you cannot go one step further and say A causes B.

What does the immoral person do, you may ask?  The immoral researcher just makes up data. Researchers call one of the ways they make up data “P-hacking.” Quite simply, that is fudging the data to support conclusions that “A is related to B” that are unjustifiable. 

  • If p-values are high, then you cannot conclude A is related to B.
  • If p-values are low, then you can conclude that A is related to B.
  • You cannot conclude A causes B without showing that A is related to B with an appropriately low p-value.

P-hacking is doing the data “manipulation” required to get lower p-values so those conclusions can be made.

Immoral researchers make connections and conclusions that are not really there. They submit their articles for peer review knowing most reviewers will never ask to see their data sets or survey instruments.  For more, see here: What Is P-Hacking & How To Avoid It? (analyticsindiamag.com)
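One way to see why p-hacking works is the multiple-comparisons problem: run enough tests on pure noise and something will eventually come up “significant” by chance. A minimal sketch, where the test counts are arbitrary assumptions:

```python
# If a researcher runs k independent tests on pure noise, each at the
# conventional 0.05 significance threshold, the chance of at least one
# spurious "significant" result grows quickly with k.
def chance_of_false_positive(k: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** k

for k in (1, 10, 20):
    print(k, round(chance_of_false_positive(k), 2))
# 1 comparison: 5%; 10 comparisons: ~40%; 20 comparisons: ~64%
```

This is why honest researchers pre-register their hypotheses and correct for multiple comparisons, while p-hackers quietly keep testing until something sticks.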

Our researcher did p-hacking and was even open about it, and no one called her out.  Remember, it is easier to go along with a narcissist’s lie than to confront them.  She did something called “imputing the data.”  She had a lot of missing data due to poor response rates, so she found a way to make up for it.  This imputation let her have more data points, which “improves” her p-values. The improved p-values allow her to make relationships appear more certain than they are.

A research meme about manipulating data

She was honest about how she p-hacked her own data.

The imputation model included the three child care variables; research site; the child and family covariates from early childhood, middle childhood, and adolescence; and the seven outcomes shown in Table 1, as well as 54-month and 15-year academic and social child outcomes to enhance imputation of missing childcare and end-of-high school variable, respectively.

Early Child Care and Adolescent Functioning at the End of High School: Results from the NICHD Study of Early Child Care and Youth Development (europepmc.org) p.8

Due to the state of modern research and peer review being in shambles, this was not a barrier to publication.  There are certain data manipulations you can do if your data is “missing at random.”  Ideally, the methods section of a paper describes how the researcher concluded the data is missing at random, including addressing potential reasons why the data might not be missing at random and how the research design attempted to control for data that is not missing at random.  Our researcher did not do this.

For a similar example, imagine you did an 18-year study of polysubstance abusers: drug users who did a wide variety of drugs.  You find out that over the 18 years, one third stopped responding.  You then do some data manipulation that makes the average dropout resemble the average respondent.  I personally would think that most of the people who stopped responding were more likely to be low-functioning addicts with the most real-world problems. 

So, when you average out the missing data with the existing data, you shift it irresponsibly.  What kind of people would be able to stay in the survey in the long run?  The high-functioning drug addicts.  With this imputation, you make the low-functioning addicts who discontinued the survey look like high-functioning addicts.
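A toy simulation can make the bias concrete. All numbers here are invented for illustration; the point is only that when dropout is driven by the outcome itself, filling the gaps from the remaining respondents inflates the average:

```python
import random
from statistics import mean

random.seed(42)

# "Functioning" scores for 1,000 hypothetical drug users, on a 0-100 scale.
scores = [random.uniform(0, 100) for _ in range(1000)]

# Dropout is NOT random: the low-functioning users stop responding.
respondents = [s for s in scores if s >= 40]

true_average = mean(scores)          # what complete data would show (~50)
imputed_average = mean(respondents)  # what imputing from respondents assumes (~70)

print(round(true_average, 1), round(imputed_average, 1))
```

Imputation built from the survivors makes the whole sample look like the survivors, which is exactly the error described above.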

Leonardo DiCaprio as the Wolf of Wall Street discusses his daily drug use.

Someone performing low-quality research would assume that drug use is correlated with being a rich and high-functioning person.  The researcher would go face-down into some cocaine trying to be a high-functioning stock trader in order to live their best life.

As another example, imagine a 15-year study of fundamentalist terrorist groups declaring that the terrorists become less radical and violent over time.  There is a huge drop-off in response rate, but the researchers decide it is “missing at random.”  You might think that is ridiculous: the most radical of the bunch could be unavailable to respond for a host of reasons, such as being missing in action, killed in action, on the run from the law, or imprisoned.  The remaining respondents were therefore either less radical to begin with or those who could not maintain a radical terroristic tempo and became milder with age.

As ridiculous as that sounds, the research author was just as deserving of ridicule for assuming her data was missing at random.  Children exposed to the daily adversity of excessive daycare could drop out of the survey at higher rates than children not exposed to that adversity.  The high-daycare children who remained in the study would then be more resilient than those who stopped responding.

So when I read article after article praising a longitudinal study that praises “early care” (a misnomer in the first place) and after-school care, and I see how badly the research was designed and performed, I have to remind myself that many people reading the material don’t know how badly they are being deceived. But I do file away in my head my distrust for the researcher and the institution sponsoring the research.

Criticism 5: Missing or Sloppy Variables

I could go on at length in this section. The researchers asked the respondents to report their GPA as a range rather than getting the actual GPA. They also asked the students to self-report their class rankings. Requesting transcripts might have affected the response rate, but it could have been done. Also, maybe one reason the data could not have been missing at random was that the students with poor grades disproportionately stopped responding.

Improvements could also have been made to the study’s selection of behavioral outcomes. The researchers could have asked about police contacts, detentions, diversions, and in-school or out-of-school suspensions. But they did not.

Birth Order, Total Number of Children, or Sex

Unfortunately, to many parents not all children are equal.  All sorts of factors influence favoritism in families, and birth order, sex, and total number of children play a clear role. A single mother with a strong affinity for her daughters could raise them completely differently than her sons. Her expectations and motivation to prepare her daughters for college could be above and beyond her expectations for her sons, whom she assumes are fine going into the trades. The researchers made no effort to control for birth order or the total number of children in the family. 

Seasonality

There was also no effort to control for seasonality (the time of year the child was born), which has been shown to have drastic importance in some long-term indicators of lifelong success and wellbeing.  Influence of seasonal birth in humans – Wikipedia

There is evidence suggesting that children who are born earlier within the same academic year gain an advantage:

“In Britain the academic year begins in September, and there may be almost a year’s chronological age difference between the eldest (September birthday) and youngest (August birthday) children in the same class. There is evidence that, in this context, children born in the autumn term (September to December birthdays) perform better academically, relative to their class peers, than those born in the spring term (January to April birthdays), who in turn outperform those born in the summer term (May to August birthdays).”[15]

In other words, since the researchers pulled their sample from only one day of the year, they completely missed any seasonality outcomes in their report.  They could simply have gotten different results if they had done their study at a different time of year.  This is also tied to the laziness of their convenience sampling.

Adverse Childhood Experiences Survey

There was no effort to control for the children’s Adverse Childhood Experiences, which would be a major confounding factor for positive or negative outcomes.  For example, consider two families that could have occurred in the study: first, a firstborn child with no ACEs; second, the youngest child in a family with ACEs related to divorce.  One could think of many reasons why the two would have different survey outcomes. The researchers made no effort to account for one of the most validated measures of adversity in children’s lives that could confound their conclusions.  In a properly designed study, total ACE scores would have been collected and evaluated for synergistic effects combined with daycare.
