Readers’ Replies to prior post on Technical Analysis (“TA”) found here: http://wp.me/p2OaYY-2ib
#1 Here is one rich technician: Paul Tudor Jones. 'Nuff said. Lowry Research has been in business a long time doing Technical Analysis. Tom Demark. Look him up.
#2 Personally I think people should use Fundamentals for investing in anything for the Long Term. Technical Analysis has a purpose but usually only for the immediate future. That’s why most Day Traders use Technical Analysis.
For instance, if you watch the moving averages and the 50-day moving average (50MA) falls below the 200-day moving average (200MA), this is known as a death cross, and nine times out of ten that I have seen that kind of action, the price on that chart starts to go down.
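The crossover rule the reader describes is mechanical enough to sketch in a few lines. This is a minimal, hypothetical illustration with synthetic prices and deliberately shortened windows (5 and 10 days instead of 50 and 200) so the crossover shows up in a small series; it detects the signal, it says nothing about whether the signal predicts anything.

```python
def sma(prices, window):
    """Simple moving average; None until enough data points exist."""
    return [
        sum(prices[i - window + 1:i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def death_crosses(prices, fast=5, slow=10):
    """Indices where the fast SMA crosses from above to below the slow SMA."""
    fast_ma, slow_ma = sma(prices, fast), sma(prices, slow)
    crosses = []
    for i in range(1, len(prices)):
        if None in (fast_ma[i - 1], slow_ma[i - 1]):
            continue  # not enough history yet for both averages
        if fast_ma[i - 1] >= slow_ma[i - 1] and fast_ma[i] < slow_ma[i]:
            crosses.append(i)
    return crosses

# A synthetic uptrend that rolls over into a downtrend:
prices = [100 + i for i in range(15)] + [114 - 2 * i for i in range(15)]
print(death_crosses(prices))  # one cross, shortly after the trend turns
```

Note that by construction the cross is detected only after the trend has already reversed: the averages lag the price, which is exactly why "the price goes down after a death cross" is partly the signal describing the past rather than predicting the future.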
Technical Analysis is mostly used for short-term movement in a stock, commodity, currency, etc., etc. It is virtually impossible to base a long-term investment on Technical Analysis.
My response: Thanks, but those opinions don't improve our knowledge about whether Technical Analysis is a usable tool. Take #1: Paul Tudor Jones is a big, successful hedgie who uses TA, enough said. Now substitute "dresses in drag and flips coins" for TA; the meaning would be the same. I don't want to pick on anyone; I read the same claim in many articles on Tudor Jones. I bet Tudor Jones couldn't even tell you EXACTLY how he uses charts. He probably blends many factors into a "sixth sense" built from thousands of hours of intensive interaction with the markets. See page one of 04_Jul_-_Tudor_Inv_Corp, where Tudor loves the dollar and then the next day he is short the dollar. Did a chart give him a signal? If so, what is the STATISTICAL EVIDENCE?
The point is, there is not and can never be such statistical evidence, since charts merely reflect PAST human choices to buy and sell. Future human action cannot be mathematically predicted.
Also, technical analysis has both passionate critics and ardent adherents. For example, an October 2009 study by New Zealand's Massey University found that of more than 5,000 strategies that employ technical analysis, none produced returns beyond what you'd expect by chance in any of the 49 countries where researchers tested them. However, scores of traders, including billionaire Paul Tudor Jones, say the discipline helped them amass great fortunes. So I tried to keep an open mind. (If Paul Tudor Jones is a billionaire, then technical analysis must work! Flawed logic!) Read more at http://www.kiplinger.com/article/investing/T052-C000-S002-our-man-goes-undercover-and-tells-all.html#YYaXPGbZyfPmbyFp.99
#2 If you have evidence that 9 out of 10 times the "Death Cross" moves prices enough for you to take advantage of it, then great for you. But again, if this "signal" did work, why wouldn't the market DISCOUNT it in the future, especially if you could precisely define what a Death Cross is?
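What would count as evidence for a "9 out of 10" claim? At minimum, a test of whether returns following the signal differ from chance. Here is a hedged sketch of one such test, a simple permutation test on synthetic data. Every name and number below is illustrative; because the "signal" here is random, it should not look significant, and real evidence would require real price data, out-of-sample validation, and transaction costs.

```python
import random

random.seed(42)

# Synthetic daily returns and an arbitrary "signal" flag on ~10% of days.
returns = [random.gauss(0.0, 0.01) for _ in range(500)]
signal = [random.random() < 0.1 for _ in range(500)]

def mean_signal_return(returns, signal):
    """Average return over the days where the signal fired."""
    picked = [r for r, s in zip(returns, signal) if s]
    return sum(picked) / len(picked)

observed = mean_signal_return(returns, signal)

# Null distribution: shuffle the signal labels many times and ask how
# often a random labeling looks at least as extreme as the observed one.
trials = 2000
count_as_extreme = 0
labels = signal[:]
for _ in range(trials):
    random.shuffle(labels)
    if abs(mean_signal_return(returns, labels)) >= abs(observed):
        count_as_extreme += 1

p_value = count_as_extreme / trials
print(f"observed mean return on signal days: {observed:.5f}")
print(f"permutation p-value: {p_value:.3f}")
```

A small p-value on a single backtest still would not settle the matter; with thousands of candidate rules, some will look significant by luck alone, which is exactly the data-mining problem the Massey study guarded against.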
There are some money managers who use technical analysis in creative ways for the long-term but I call them market mystics.
What I AM saying is: use TA if it works for you, however you define "works for you," be it for confidence, money management, setting risk parameters, finding opportunities, etc., but don't fool yourself. THERE IS NO SCIENTIFIC EVIDENCE THAT TA HAS ANY EFFICACY.
I will make a $1,000 bet. Show me statistical proof of long-term (fifteen years or more) market-beating returns achieved solely with TA. Ask these guys: http://www.tradingacademy.com/about-us/. I guess SELLING TA is more profitable than USING it. I smell a legal, high-pressure selling scam: http://www.ripoffreport.com/r/online-trading-academy-boston/norwood-massachusetts-/online-trading-academy-boston-ota-watch-out-for-this-high-pressure-tactical-manipulatio-888094
Why the strange picture at the top of this post? This NY Times article by Gary Taubes shows how difficult it is to obtain scientific proof for even life-threatening health issues.
NEARLY six weeks into the 2014 diet season, it’s a good bet that many of us who made New Year’s resolutions to lose weight have already peaked. If clinical trials are any indication, we’ve lost much of the weight we can expect to lose. In a year or two we’ll be back within half a dozen pounds of where we are today.
The question is why. Is this a failure of willpower or of technique? Was our chosen dietary intervention — whether from the latest best-selling diet book or merely a concerted attempt to eat less and exercise more — doomed to failure? Considering that obesity and its related diseases — most notably, Type 2 diabetes — now cost the health care system more than $1 billion per day, it’s not hyperbolic to suggest that the health of the nation may depend on which is the correct answer.
Since the 1960s, nutrition science has been dominated by two conflicting observations. One is that we know how to eat healthy and maintain a healthy weight. The other is that the rapidly increasing rates of obesity and diabetes suggest that something about the conventional thinking is simply wrong.
In 1960, fewer than 13 percent of Americans were obese, and diabetes had been diagnosed in 1 percent. Today, the percentage of obese Americans has almost tripled; the percentage of Americans with diabetes has increased seven-fold.
Meanwhile, the research literature on obesity has also ballooned. In 1960, fewer than 1,100 articles were published on obesity or diabetes in the indexed medical literature. Last year it was more than 44,000. In total, over 600,000 articles have been published purporting to convey some meaningful information on these conditions.
It would be nice to think that this deluge of research has brought clarity to the issue. The trend data argue otherwise. If we understand these disorders so well, why have we failed so miserably to prevent them? The conventional explanation is that this is the manifestation of an unfortunate reality: Type 2 diabetes is caused or exacerbated by obesity, and obesity is a complex, intractable disorder. The more we learn, the more we need to know.
Here’s another possibility: The 600,000 articles — along with several tens of thousands of diet books — are the noise generated by a dysfunctional research establishment. Because the nutrition research community has failed to establish reliable, unambiguous knowledge about the environmental triggers of obesity and diabetes, it has opened the door to a diversity of opinions on the subject, of hypotheses about cause, cure and prevention, many of which cannot be refuted by the existing evidence. Everyone has a theory. The evidence doesn’t exist to say unequivocally who’s wrong.
The situation is understandable; it’s a learning experience in the limits of science. The protocol of science is the process of hypothesis and test. This three-word phrase, though, does not do it justice. The philosopher Karl Popper did when he described “the method of science as the method of bold conjectures and ingenious and severe attempts to refute them.”
In nutrition, the hypotheses are speculations about what foods or dietary patterns help or hinder our pursuit of a long and healthy life. The ingenious and severe attempts to refute the hypotheses are the experimental tests — the clinical trials and, to be specific, randomized controlled trials. Because the hypotheses are ultimately about what happens to us over decades, meaningful trials are prohibitively expensive and exceedingly difficult. It means convincing thousands of people to change what they eat for years to decades. Eventually enough heart attacks, cancers and deaths have to happen among the subjects so it can be established whether the dietary intervention was beneficial or detrimental.
And before any of this can even be attempted, someone’s got to pay for it. Since no pharmaceutical company stands to benefit, prospective sources are limited, particularly when we insist the answers are already known. Without such trials, though, we’re only guessing whether we know the truth.
Back in the 1960s, when researchers first took seriously the idea that dietary fat caused heart disease, they acknowledged that such trials were necessary and studied the feasibility for years. Eventually the leadership at the National Institutes of Health concluded that the trials would be too expensive — perhaps a billion dollars — and might get the wrong answer anyway. They might botch the study and never know it. They certainly couldn’t afford to do two such studies, even though replication is a core principle of the scientific method. Since then, advice to restrict fat or avoid saturated fat has been based on suppositions about what would have happened had such trials been done, not on the studies themselves.
Nutritionists have adjusted to this reality by accepting a lower standard of evidence on what they’ll believe to be true. They do experiments with laboratory animals, for instance, following them for the better part of the animal’s lifetime — a year or two in rodents, say — and assume or at least hope that the results apply to humans. And maybe they do, but we can’t know for sure without doing the human experiments.
They do experiments on humans — the species of interest — for days or weeks or even a year or two and then assume that the results apply to decades. And maybe they do, but we can’t know for sure. That’s a hypothesis, and it must be tested.
And they do what are called observational studies, observing populations for decades, documenting what people eat and what illnesses beset them, and then assume that the associations they observe between diet and disease are indeed causal — that if people who eat copious vegetables, for instance, live longer than those who don’t, it’s the vegetables that cause the effect of a longer life. And maybe they do, but there’s no way to know without experimental trials to test that hypothesis.
The associations that emerge from these studies used to be known as “hypothesis-generating data,” based on the fact that an association tells us only that two things changed together in time, not that one caused the other. So associations generate hypotheses of causality that then have to be tested. But this hypothesis-generating caveat has been dropped over the years as researchers studying nutrition have decided that this is the best they can do.
One lesson of science, though, is that if the best you can do isn’t good enough to establish reliable knowledge, first acknowledge it — relentless honesty about what can and cannot be extrapolated from data is another core principle of science — and then do more, or do something else. As it is, we have a field of sort-of-science in which hypotheses are treated as facts because they’re too hard or expensive to test, and there are so many hypotheses that what journalists like to call “leading authorities” disagree with one another daily.
It’s an unacceptable situation. Obesity and diabetes are epidemic, and yet the only relevant fact on which relatively unambiguous data exist to support a consensus is that most of us are surely eating too much of something. (My vote is sugars and refined grains; we all have our biases.) Making meaningful inroads against obesity and diabetes on a population level requires that we know how to treat and prevent them on an individual level. We’re going to have to stop believing we know the answer, and challenge ourselves to come up with trials that do a better job of testing our beliefs.
Before I, for one, make another dietary resolution, I’d like to know that what I believe I know about a healthy diet is really so. Is that too much to ask?
Gary Taubes is a health and science journalist and co-founder of the Nutrition Science Initiative.