Once Upon A Time In Wuhan . . .

“Complicit politicians and most media outlets have gone along with an engineered fabrication involving an invisible pathogen that has never been shown to exist.”

Jon Rappoport just published a great article that ties in nicely with this video.

They Could Have Done This With Any Cold

Bretigne Shaffer interview with Kevin McKernan

Kevin McKernan developed the SOLiD sequencer and worked on the Human Genome Project at MIT/WIBR

An important discussion of the cycle threshold limits of SARS-CoV-2 PCR tests. Running a test past the recommended cycle number produces a high number of false positive results. The cycle threshold should not exceed 30, yet most test labs in the US and around the world set it to 37-40, and some as high as 45. At that point the RNA is amplified so much that even an old, inactive viral fragment can yield a positive result, causing many people to quarantine unnecessarily.

In any given year, we could have done similar mass testing for any common cold and ended up with similar results. There is no need to lock everyone in their homes. This is destroying our society.

The best option is to stop mass testing. Do not submit to a test unless you are showing symptoms and your doctor recommends a test. Understand the ramifications of a positive test. You will need to quarantine and submit names of all close contacts who will also be forced to quarantine. It is unnecessary. Treat any illness — as we always have. If you become worse, seek medical attention. Most likely it will be no different than any past cold or flu.

Kevin McKernan twitter: https://twitter.com/Kevin_McKernan
Bretigne Shaffer website: https://bretigne.typepad.com/on_the_banks/

Are You Positive You Are ‘Positive’?

Sharing this article about unreliable PCR tests from https://lockdownsceptics.org/.

Government Innumeracy

12 September 2020. Updated 13 September 2020. By James Ferguson

Are you positive you are ‘positive’?

“When the facts change, I change my mind. What do you do sir?” – John Maynard Keynes

The UK has a big problem with the false positive rate (FPR) of its COVID-19 tests. The authorities acknowledge no FPR, so positive test results are never corrected for false positives, and that is where the trouble starts.

The standard COVID-19 RT-PCR test results have a consistent positive rate of ≤ 2% which also appears to be the likely false positive rate (FPR), rendering the number of official ‘cases’ virtually meaningless. The likely low virus prevalence (~0.02%) is consistent with as few as 1% of the 6,100+ Brits now testing positive each week in the wider community (pillar 2) tests actually having the disease.

We are now asked to believe that a random, probably asymptomatic member of the public is 5x more likely to test ‘positive’ than someone tested in hospital, which seems preposterous given that ~40% of diagnosed infections originated in hospitals.

The high amplification used in PCR testing requires the results to be interpreted by black-box software algorithms, which the numbers suggest are preset to deliver a 2% positive rate. If so, we will never get 'cases' down until and unless we reduce, or better yet cease altogether, randomized testing. Instead the government plans to ramp testing up to 10m tests a day at a cost of £100bn, equivalent to the entire NHS budget.

Government interventions have seriously negative political, economic and health implications yet are entirely predicated on test results that are almost entirely false. Despite the prevalence of virus in the UK having fallen to about 2-in-10,000, the chances of testing ‘positive’ stubbornly remain ~100x higher than that.

First do no harm

It may surprise you to know that in medicine, a positive test result does not often, or even usually, mean that an asymptomatic patient has the disease. The lower the prevalence of a disease relative to the false positive rate (FPR) of the test, the less reliable a positive result becomes. Consequently, it is often advisable to avoid random testing in the absence of corroborating symptoms, for certain types of cancer for example, and doubly so if the treatment has non-trivial negative side-effects. In Probabilistic Reasoning in Clinical Medicine (1982), edited by Nobel laureate Daniel Kahneman and his long-time collaborator Amos Tversky, David Eddy presented physicians with the following diagnostic puzzle. Women aged 40 participate in routine screening for breast cancer, which has a prevalence of 1%. The mammogram test has a false negative rate of 20% and a false positive rate of 10%. What is the probability that a woman with a positive test actually has breast cancer? The correct answer in this case is about 7.5%, but 95 of the 100 doctors in the study gave answers in the range 70-80%, i.e. their estimates were out by an order of magnitude. [The solution: in each batch of 100,000 tests, 800 (80% of the 1,000 women with breast cancer) will be picked up; but so too will 9,900 (10% FPR) of the 99,000 healthy women. Therefore, the chance of actually having cancer (800) given a positive test (800 + 9,900 = 10,700) is only about 7.5% (800/10,700).]
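
For readers who want to check this kind of arithmetic themselves, the positive predictive value (PPV) follows directly from Bayes' theorem. Below is a minimal Python sketch using only the figures quoted in the Eddy example above; the function name is mine, not part of the original study.

```python
def ppv(prevalence, sensitivity, fpr):
    """Probability that a positive test is a true positive (Bayes' theorem)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * fpr
    return true_pos / (true_pos + false_pos)

# Eddy's mammogram example: 1% prevalence, 80% sensitivity (20% FNR), 10% FPR
print(f"PPV = {ppv(0.01, 0.80, 0.10):.1%}")   # ~7.5%, not the 70-80% most doctors guessed
```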

Conditional probabilities

In the section on conditional probability in their new book Radical Uncertainty, Mervyn King and John Kay quote a similar study by psychologist Gerd Gigerenzer of the Max Planck Institute, author of Reckoning with Risk, who illustrated medical experts' statistical innumeracy with the Haemoccult test for colorectal cancer, a disease with an incidence of 0.3%. The test had a false negative rate of 50% and a false positive rate of 3%. Gigerenzer and co-author Ulrich Hoffrage asked 48 experienced doctors (with an average of 14 years' experience) what the probability was that someone testing positive actually had colorectal cancer. The correct answer in this case is around 5%. However, about half the doctors estimated the probability at either 50% or 47%, i.e. the sensitivity, or the sensitivity less the false positive rate (50% - 3%), respectively. [The solution: from 100,000 test subjects, the test would correctly identify only half of the 300 who had cancer, but would also falsely flag 2,991 (3%) of the 99,700 healthy subjects. The chance of actually having cancer given a positive test (150 + 2,991 = 3,141) is therefore 4.78% (150/3,141).] As Gigerenzer concluded in a subsequent paper in 2003, "many doctors have trouble distinguishing between the sensitivity [1 - FNR], the specificity [1 - FPR], and the positive predictive value (probability that a positive test is a true positive) of a test - three conditional probabilities." Because doctors and patients alike are inclined to believe that almost all 'positive' tests indicate the presence of disease, Gigerenzer argues that randomised screening for low-incidence diseases is far too poorly understood and too inaccurate, and can prove harmful where interventions have non-trivial negative side-effects.

Yet this straightforward lesson in medical statistics from the 1990s has been all but forgotten in the COVID-19 panic of 2020. Whilst false negatives might be the major concern when a disease is rife, when the incidence is low, as with the specific cancers above or the COVID-19 PCR test, the overriding problem is the false positive rate (FPR). There have been 17.6m cumulative RT-PCR (antigen) tests in the UK, 350k (2%) of which gave positive results. Westminster assumes this means the prevalence of COVID-19 is about 2%, but that conclusion is predicated on the tests being 100% accurate which, as we will see below, is not the case at all.

Positives ≠ cases

One clue is that this 2% positive rate crops up worryingly consistently, even though the vast majority of those tested nowadays are not in hospital, unlike the early days. For example, from the 520k pillar 2 (community) tests in the fortnight around the end of May, there were 10.5k positives (2%), in the week ending June 24th there were 4k positives from 160k pillar 2 tests (2%) and last week about 6k of the 300k pillar 2 tests (2% again) were also ‘positive’. There are two big problems with this. First, medically speaking, a positive test result is not a ‘case’. A ‘case’ is by definition both symptomatic and must be diagnosed by a doctor but few of the pillar 2 positives report any symptoms at all and almost none are seen by doctors. Second, NHS diagnosis, hospital admission and death data have all declined consistently since the peak, by over 99% in the case of deaths, suggesting it is the ‘positive’ test data that have been corrupted. The challenge therefore is to deduce what proportion of the reported ‘positives’ actually have the disease (i.e. what is the FPR)? Bear in mind two things. First, the software that comes with the PCR testing machines states that these machines are not to be used for diagnostics (only screening). Second, the positive test rate can never be lower than the FPR.

Is UK prevalence now 0.02%?

The epidemiological rule-of-thumb for novel viruses is that medical cases can be assumed to be about 10x deaths and infections about 10x cases. Note too that by medical cases what is meant is symptomatic hospitalisations, not asymptomatic 'positive' RT-PCR test results. With no reported FPR with which to adjust the reported test positives, but with deaths now averaging 7 per day in the UK, we can work backwards to roughly 70 daily symptomatic 'cases'. This we can roughly corroborate with NHS diagnoses, which average 40 per day in England (say 45 for the UK as a whole). The factor-of-10 rule-of-thumb therefore implies 450-700 new daily infections. UK government figures differ from the NHS's, and daily hospital admissions are now 84, after peaking in early April at 3,356 (-97.5%). Since the infection period lasts 22-23 days, the official death and diagnosis data indicate roughly 10-18k current active infections in the UK, 90% of whom feel just fine. Even the 20k daily pillar 1 (in-hospital) tests only result in about 80 (0.4%) positives, 40 diagnoses and 20 admissions. Crucially, all these data are an order of magnitude lower than the positive test data and imply a virus prevalence of 0.015%-0.025% (average 0.02%), which is far too low for randomized testing with anything less than a 100% perfect test; and the RT-PCR test is certainly less than 100% perfect.
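
To make the back-of-envelope arithmetic explicit, here is the same estimate in Python. The inputs (7 deaths a day, ~45 diagnoses a day, a 22-23 day infection period) are the figures quoted above; the 10x multipliers are the rule of thumb, and the ~67m UK population is my assumption.

```python
uk_population = 67_000_000          # assumption: ~67m

daily_deaths = 7
daily_nhs_diagnoses = 45            # ~40/day in England, scaled up for the UK
infectious_period_days = 22.5       # midpoint of the 22-23 days quoted above

# Rule of thumb: cases ~ 10x deaths, infections ~ 10x cases (or ~10x diagnoses).
daily_infections_from_deaths = daily_deaths * 10 * 10        # 700
daily_infections_from_diagnoses = daily_nhs_diagnoses * 10   # 450

for daily_infections in (daily_infections_from_diagnoses, daily_infections_from_deaths):
    active = daily_infections * infectious_period_days
    print(f"{daily_infections} new infections/day -> ~{active:,.0f} active, "
          f"prevalence ~{active / uk_population:.3%}")
# roughly 10,000-16,000 active infections, i.e. a prevalence of ~0.015%-0.024%
```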

Only 1% of ‘positives’ are positive

So, how do we reconcile an apparent prevalence of around 0.02% with a consistent positive PCR test rate of around 2%, some 100x higher? Because the prevalence of the disease is so low, the reported UK pillar 2 positive rate and the FPR are both about 2%, meaning almost all 'positive' test results are false: roughly 99% of them are errors. In other words, for each 100,000 people tested, we are picking up 24 of the 25 true positives (assuming 98% sensitivity) but also falsely identifying 2,000 (2%) of the 99,975 healthy people as positive. Not only do fewer than 1.2% (24/2,024) of pillar 2 'positives' really have COVID-19, of which only around 0.1% would be medically defined as symptomatic 'cases', but this 2% FPR also explains the ~2% (2.02% in this case) positive rate so consistently observed in the official UK data.
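
The reconciliation can be checked in a few lines. This sketch simply re-runs the PPV arithmetic with the numbers used in this paragraph; the 0.025% prevalence, 98% sensitivity and 2% FPR are the author's working assumptions, not measured values.

```python
prevalence = 0.00025   # ~0.025%, the assumed prevalence
sensitivity = 0.98     # the assumed sensitivity
fpr = 0.02             # the ~2% false positive rate inferred above

true_pos = prevalence * sensitivity          # per person tested
false_pos = (1 - prevalence) * fpr
positive_rate = true_pos + false_pos

print(f"expected positive rate ~ {positive_rate:.2%}")                       # ~2.02%, dominated by the FPR
print(f"share of positives that are genuine ~ {true_pos / positive_rate:.1%}")   # ~1.2%
```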

The priority now: FPR

This illustrates just how much the FPR matters and how seriously compromised the official data are without it. Carl Mayers, Technical Capability Leader at the Ministry of Defence's Defence Science and Technology Laboratory (Dstl) at Porton Down, is just one government scientist who is understandably worried about the undisclosed FPR. Mayers and his co-author Kate Baker submitted a paper at the start of June to the UK Government's Scientific Advisory Group for Emergencies (SAGE) noting that the RT-PCR assays used for testing in the UK had been verified by Public Health England (PHE) "and show over 95% sensitivity and specificity" (i.e. a sub-5% false positive rate) in idealized laboratory conditions, but that "we have been unable to find any data on the operational false positive rate" (their bold) and "this must be measured as a priority" (my bold). Yet SAGE minutes from the following day's meeting reveal this paper was not even discussed.

False positives

According to Mayers, an establishment insider, PHE is aware the COVID-19 PCR test false positive rate (FPR) may be as high as 5%, even in idealized 'analytical' laboratory environments. Out in the real world, though, 'operational' false positives are often at least twice as likely to occur: via contamination of equipment (poor manufacturing) or reagents (poor handling), during sampling (poor execution), 'aerosolization' during swab extraction (poor luck), cross-reaction with other genetic material during DNA amplification (poor design specification), and contamination of the DNA target (poor lab protocol). All of these are aggravating factors on top of any problems inherent in the analytical sensitivity of the test process, which is itself far less binary than policymakers seem to believe. As if this were not bad enough, over-amplification of viral samples (i.e. a cycle threshold, or Ct, above 30) causes old cases to test positive six weeks or more after recovery, when people are no longer infectious and the virus in their system is no longer remotely viable, leading Jason Leitch, Scotland's National Clinical Director, to call the current PCR test 'a bit rubbish'.

Test…

The RT-PCR swab test looks for viral RNA in infected people. Reverse transcription (RT) is where viral RNA is converted into DNA, which is then amplified (doubling each cycle) in a polymerase chain reaction (PCR). A primer is used to select the specific DNA, and PCR works on the assumption that only the desired DNA will be duplicated and detected. Whilst each repeat cycle increases the likelihood of detecting viral DNA, it also increases the chances that broken bits of DNA, contaminating DNA or merely similar DNA will be duplicated as well, and therefore that any match found is not actually from the Covid viral sequence.

…and repeat

Amplification makes it easier to discover viral DNA, but too much amplification makes it too easy. In Europe the amplification, or 'cycle threshold' (Ct), is limited to 30Ct, i.e. doubling 30x (2 to the power of 30, about 1 billion copies). It has been known since April that even apparently heavy viral load cases "with Ct above 33-34 using our RT-PCR system are not contagious and can thus be discharged from hospital care or strict confinement for non-hospitalized patients." A review of 25 related papers by Carl Heneghan at the Centre for Evidence-Based Medicine (CEBM) has also concluded that any positive result above 30Ct is essentially non-viable even in lab cultures (i.e. in the absence of any functional immune system), let alone in humans. However, in the US an amplification of 40Ct is common (about 1 trillion copies) and in the UK, COVID-19 RT-PCR tests are amplified by up to 42Ct. This is 2 to the power of 42 (about 4.4 trillion copies), which is 4,096x (2 to the power of 12) the 'safe' screening limit. The higher the amplification, the more likely you are to get a 'positive', but also the more likely it is that this positive will be false. True positives can be confirmed by genetic sequencing, for example at the Sanger Institute, but this check is not made, or at least if it is, the data go unreported.
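
Because each cycle roughly doubles the target, the copy numbers quoted here are just powers of two. A quick check, assuming perfect doubling (which real reactions never quite achieve):

```python
screening_limit_ct = 30            # the ~30-cycle limit discussed above

for ct in (30, 40, 42, 45):
    print(f"{ct} cycles -> ~{2**ct:.2e} copies of the original target")

extra_cycles = 42 - screening_limit_ct
print(f"42Ct vs {screening_limit_ct}Ct: 2**{extra_cycles} = {2**extra_cycles}x more amplification")
```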

The sliding scale

Whatever else you may previously have thought about the COVID-19 PCR test, it should be clear by now that it is far from fully accurate, objective or binary. Positive results are not black or white but sit on a sliding scale of grey. This means labs are required to decide, somewhat subjectively, where to draw the line, because ultimately, if you run enough cycles, every single sample would eventually turn positive due to amplification, viral breakdown and contamination. As Marianne Jakobsen of Odense University Hospital in Denmark puts it on UgenTec's website: "there is a real risk of errors if you simply accept cycler software calls at face value. You either need to add a time-consuming manual review step, or adopt intelligent software."

Adjusting Ct test results

Most labs therefore run software to adjust positive results (i.e. to decide the threshold) closer to some sort of 'expected' rate. However, as we painfully discovered with Prof. Neil Ferguson's spectacularly inaccurate epidemiological model (expected UK deaths 510,000; actual deaths 41,537), if the model disagrees with reality, some modelers prefer to adjust reality rather than their model. Software companies are no exception, and one of them, diagnostics.ai, is taking another, UgenTec (which won the uncontested bid for setting and interpreting the Lighthouse Labs thresholds), to the High Court on September 23rd, apparently claiming UgenTec had no track record, external quality assurance (EQA) or experience in this field. Whilst this case may prove no more than sour grapes on diagnostics.ai's part, it does show that PCR test result interpretation, whether done by human or computer, is not only ultimately subjective but, as such, will always effectively bury the FPR.

Increase tests, increase ‘cases’

So, is it the software that is setting the UK positive rate at ≤ 2%? Because if it is, we will never get the positive rate below 2% until we cease testing asymptomatics. Last week (ending August 26th) there were 6,122 positives from 316,909 pillar 2 tests (1.93%), much as in the week of July 22nd (1.9%). Pillar 2 tests deliver a (suspiciously) stable proportion of positive results, consistently averaging ≤ 2%. As Carl Heneghan at the CEBM in Oxford has explained, the increase in the absolute number of pillar 2 positives is nothing more than a function of increased testing, not increased disease, as erroneously reported in the media. Heneghan shows that whilst pillar 1 cases per 100,000 tests have been steadily declining for months, pillar 2 cases per 100,000 tests are "flatlining" (at around 2%).

30,000 under house arrest

In the week ending August 26th, there were 1.45m tests processed in the UK across all 4 pillars, though there seem to be no published results for the 1m of these tests that were pillar 3 (antibody) or pillar 4 ('national surveillance') tests (NB none of the UK numbers ever seem to match up). But as far as pillar 1 (hospital) cases are concerned, these have fallen by about 90% since the start of June, so almost all positive cases now reported in the UK (> 92% of the total) come from the largely asymptomatic pillar 2 tests in the wider community. Whilst pillar 2 tests were originally intended only for the symptomatic (doctor referral etc.), the facilities have been swamped with asymptomatics wanting testing, and their numbers are only increasing (+25% over the last two weeks alone), perhaps because there are now very few symptomatics out there. The proportion of pillar 2 tests taken by asymptomatics is yet another figure that is not published, but there are 320k pillar 2 tests per week, whilst the weekly rate of COVID-19 diagnoses by NHS England is just 280. Even if we assume that Brits are total hypochondriacs and that only 1% of those reporting respiratory symptoms to their doctor (who sends them out for a pillar 2 test) end up diagnosed, that would still only account for 28,000 symptomatic testers, meaning well over 90% of all pillar 2 tests are taken by the asymptomatic; and asymptomatics taking PCR tests when the FPR is higher than the prevalence (100x higher in this instance) produces results that are almost meaningless, with roughly 99% of the positives false.

Believing six impossible things before breakfast

Whilst the positive rate for pillar 2 is consistently ~2% (with that suspiciously low degree of variability), it is more than possible that the raw-data FPR is 5-10% (consistent with the numbers Carl Mayers referred to) and the only reason we don't see such high numbers is that the software is adjusting the positive threshold back down to 2%. If that is the case, then no matter what the true prevalence of the disease, the positive count will always and forever be stuck at ~2% of the number of tests. The only way to 'eradicate' COVID-19 in that case would be to cease randomized testing altogether, which Gerd Gigerenzer might tell you wouldn't be a bad idea at all. Instead, lamentably, the UK government is reportedly doubling down with its ill-informed 'Operation Moonshot', an epically misguided plan to increase testing to 10m tests a day, which would mean testing almost exclusively asymptomatics, and which we can therefore confidently expect to generate an apparent surge in positive 'cases' to 200,000 a day, equal to the FPR and proportionate to the increase in the number of tests.
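
The 200,000-a-day projection is nothing more than the proposed testing volume multiplied by the apparent ~2% floor. A one-line sanity check, on the author's assumption that the floor is entirely FPR-driven:

```python
daily_tests = 10_000_000          # the reported Operation Moonshot target
apparent_floor = 0.02             # the ~2% positive rate the author argues is FPR-driven

print(f"{daily_tests * apparent_floor:,.0f} 'positives' per day")   # 200,000
```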

Emperor’s new clothes

Interestingly, though not in a good way, the positive rate seems to differ markedly depending on whether we are talking about pillar 1 tests (mainly NHS labs) or pillar 2 tests, mainly managed by Deloitte (weird but true) which gave the software contract to UgenTec and which between them set the ~2% positive thresholds for the Lighthouse Lab network. This has had the quirky result that a gullible British public is now expected to believe that people in hospital are 4-5x less likely to test positive (0.45%) than fairly randomly selected, largely asymptomatic members of the general public (~2%), despite 40% of transmissions being nosocomial (at hospital). The positive rate, it seems, is not just suspiciously stable but subject to worrying lab-by-lab idiosyncrasies pre-set by management consultants, not doctors. It is little wonder no one is willing to reveal what the FPR is, since there’s a good chance nobody really knows any longer; but that is absolutely no excuse for implying it is zero.

Wave Two or wave goodbye?

The implications of the overt discrepancy between the trajectories of UK positive tests (up) and diagnoses, hospital admissions and deaths (all down) need to be explained. Positives bottomed below 550 per day on July 8th and have since trebled to 1,500+ per day. Yet over the same period (shifted forward 12 days to reflect the lag between hospitalisation and death), daily deaths have dropped, also by a factor of three, from 22 to 7, as indeed have admissions, from 62 to 20 (compare the right-hand side of the upper and lower panels in the chart below). Much more likely, the positive but asymptomatic tests are false positives. The Vivaldi 1 study of all UK care home residents found that 81% of positives were asymptomatic, which, for this most vulnerable cohort, probably means false positive.

Chart: UK daily & 7-day COVID-19 cases (top) and deaths (below)

This order-of-magnitude discrepancy between positive test results and the true incidence of the disease also shows up in the NHS data for 9th August (the most recent available), which show daily diagnoses (40) and hospital admissions (33) in England way below the Gov.UK positive 'cases' (1,351) and admissions (53) data for the same day. Wards are empty and admissions are so low that I know of at least one hospital (Taunton in Somerset), for example, which discharged its last COVID-19 patient three weeks ago and hasn't had a single admission since. Thus the most likely reason fewer than 3% (40/1,351) of positive 'cases' are confirmed by diagnosis is the ~2% FPR. Hence the FPR needs to be expressly reported and incorporated into an explicit adjustment of the positive data before even more harm is done.

Occam’s Razor

Oxford University's Sunetra Gupta believes it is entirely possible that the effective herd immunity threshold (HIT) has already been reached, especially given that there hasn't been a genuine second wave anywhere. The only measure suggesting a prevalence higher than 0.025% is the positive test rate, but those data are corrupted by the FPR. The very low prevalence of the disease means that the most rational explanation for almost all the positives (2%), at least in the wider community, is the 2% FPR. This benign conclusion is further supported by the 'case' fatality rate (CFR), which has declined roughly 40-fold: from 19% of all 'cases' at the mid-April peak to just 0.45% of all 'positives' now. The official line is that we are getting better at treating the disease and/or that it is only healthy young people getting it now; but surely the far simpler explanation is the mathematically supported one: that we are wrongly assuming, against all the evidence, that the PCR test results are 100% accurate.

Fear and confusion

Deaths and hospitalizations have always provided a far truer, and harder to misrepresent, profile of the progress of the disease. Happily, hospital wards are empty and deaths had already all but disappeared off the bottom of the chart (lower panel of the chart above) by mid/late July, implying the infection itself was all but gone by mid-June. So why are UK businesses still facing restrictions and enduring localized lockdowns and 10pm curfews (Glasgow, Bury, Bolton and Caerphilly)? Why are Brits forced to wear masks, subjected to traveler quarantines and, if randomly tested positive, forced into self-isolation along with their friends and families? Why has the UK government listened to the histrionics of discredited self-publicists like Neil Ferguson (who vaingloriously and quite sickeningly claims to have 'saved' 3.1m lives) rather than the calm, quiet and sage interpretations offered by Oxford University's Sunetra Gupta, Cambridge University's Sir David Spiegelhalter, the CEBM's Carl Heneghan or Porton Down's Carl Mayers? Let's be clear: it certainly has nothing to do with 'the science' (if by science we mean 'math'); but it has a lot to do with a generally poor grasp of statistics in Westminster, and even more to do with political interference and overreach.

Bad Math II

As an important aside, it appears that the whole global lockdown fiasco might have been caused by another elementary mathematical mistake from the start. The case fatality rate (CFR) is not to be confused with the infection fatality rate (IFR), which is usually 10x smaller. This is epidemiology 101. The epidemiological rule-of-thumb mentioned above is that (mild and therefore unreported) infections can be initially assumed to be approximately 10x cases (hospital admissions) which are in turn about 10x deaths. The initial WHO and CDC guidance following Wuhan back in February was that COVID-19 could be expected to have the same 0.1% CFR as flu. The mistake was that 0.1% was flu’s IFR, not its CFR. Somehow, within days, Congress was then informed on March 11th that the estimated mortality for the novel coronavirus was 10x that of flu and days after that, the lockdowns started.
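
In numbers, the alleged mix-up looks like this. The sketch below only restates the author's argument; the 0.1% flu figure and the 10x rule of thumb are the ones quoted above, and the "10x flu" line is the comparison reportedly given to Congress.

```python
flu_ifr = 0.001                    # 0.1%: flu's infection fatality rate (IFR), per the text
infections_per_case = 10           # rule of thumb: infections ~ 10x reported cases
flu_cfr = flu_ifr * infections_per_case     # implied flu case fatality rate (CFR), ~1%

covid_mortality_quoted = flu_ifr * 10       # "10x flu" as presented to Congress, ~1%

print(f"flu CFR implied by the rule of thumb: {flu_cfr:.1%}")
print(f"'10x flu' vs flu's CFR: {covid_mortality_quoted / flu_cfr:.0f}x")   # ~1x, like for like
print(f"'10x flu' vs flu's IFR: {covid_mortality_quoted / flu_ifr:.0f}x")   # ~10x, the mismatched comparison
```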

Neil Ferguson: Covid’s Matthew Hopkins

This slip-of-the-tongue error was, naturally enough, copied, compounded and legitimized by the notorious Prof. Neil Ferguson, who referenced a March 13th paper he had co-authored with Verity et al., which took "the CFR in China of 1.38% (to) obtain an overall IFR estimate for China of 0.66%". Not three days later, his ICL team's infamous March 16th paper further bumped up "the IFR estimates from Verity et al… to account for a non-uniform attack rate giving an overall IFR of 0.9%." Just like magic, the IFR implied by his own CFR estimate of 1.38% had, without cause, justification or excuse, risen 6.5-fold from his peers' rule-of-thumb of 0.14% to 0.9%, which incidentally meant his mortality forecast would be multiplied by the same factor. Not satisfied with that, he then also overstated the threshold for herd immunity.
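
Expressed as arithmetic, using the figures in this paragraph (the 0.14% "rule-of-thumb" IFR is simply the author's CFR-divided-by-10 shortcut, not a published estimate):

```python
verity_cfr_china = 0.0138                  # CFR for China from Verity et al.
rule_of_thumb_ifr = verity_cfr_china / 10  # the author's CFR/10 shortcut, ~0.14%
verity_ifr = 0.0066                        # IFR actually reported by Verity et al.
icl_model_ifr = 0.009                      # IFR used in the 16 March ICL paper

print(f"rule-of-thumb IFR ~ {rule_of_thumb_ifr:.2%}")
print(f"ICL model IFR vs rule of thumb: {icl_model_ifr / rule_of_thumb_ifr:.1f}x")   # ~6.5x
# Forecast deaths scale roughly linearly with the assumed IFR, so the same
# factor feeds more or less directly into the headline mortality forecast.
```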

Compounding errors

Because Ferguson's model simplistically assumed no natural immunity (there is) and homogeneous social mixing (there isn't), it does not anticipate herd immunity until 81% of the population has been infected. All the evidence since as far back as February and the Diamond Princess has indicated that effective herd immunity occurs at around a 20-25% infection rate; but the modelers have still not updated their models to the real-world data and I don't suppose they ever will. This is also why these models continue to report an R of ≥ 1.0 (growth) when the data, at least on hospital admissions and deaths, suggest the R has been 0.3-0.6 (steady decline) since March. Compound all these errors and Ferguson's expected UK death toll of 510k has proved to be 12x too high. His forecast of 2.2m US deaths has also, thankfully but no thanks to him, proved roughly 11x too high. The residual problem is that the politicians still believe this is merely Armageddon postponed, not Armageddon averted. "Cowards die many times before their deaths; the valiant never taste of death but once" (Shakespeare).

Quality control

It is wholly standard to insist on external quality assurance (EQA) for any test, but none has been provided here. Indeed, all information is held back on a need-to-know rather than a free-society basis. The UK carried out 1.45m tests last week but published the results for only 452k of them. No pillar 3 (antibody) test results have been published at all, which raises the question: why not (the official reason, that the data have been anonymized, makes no sense)? The problem is that instead of addressing the FPR, the authorities act as if it were zero, and so assume a relatively high virus prevalence. If, however, the 2% positive rate is merely a reflection of the FPR, a likely explanation for why pillar 3 results remain unpublished might be that they counterintuitively show a decline in antibody positives. Yet this is only to be expected if the prevalence is both very low and declining. T-cells retain the information to make antibodies, but if there is no call for them because people are no longer coming into contact with the infection, the antibodies present in the bloodstream decline. Why there are no published data on pillar 4 ('national surveillance') PCR tests remains a mystery.

It’s not difficult

However, it is relatively straightforward to resolve the FPR issue. The Sanger Institute is gene-sequencing positive samples, and sequencing will fail for any false positives, so publishing the proportion of samples that fail sequencing would go a long way towards answering the FPR question. Alternatively, we could subject positive PCR tests to a protein test for confirmation. Lab-contaminated and/or previously-infected-now-recovered samples would not generate these proteins the way a live virus would, so once again, the proportion of positive tests with no detectable protein would give us a reliable indication of the FPR.
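
Either confirmation route implies the same simple estimator: if you are willing to treat every reported positive that fails sequencing (or shows no viral protein) as a false positive, the FPR falls straight out. A hedged sketch with placeholder inputs, since no such confirmation data have been published:

```python
def estimated_fpr(total_tests, reported_positives, confirmed_positives):
    """Upper-bound FPR estimate: treat every unconfirmed positive as a false positive."""
    false_positives = reported_positives - confirmed_positives
    people_without_disease = total_tests - confirmed_positives
    return false_positives / people_without_disease

# Placeholder figures for illustration only (not published data):
# 100,000 tests, 2,000 reported positives, 24 confirmed by sequencing or protein.
print(f"estimated FPR ~ {estimated_fpr(100_000, 2_000, 24):.2%}")   # ~1.98%
```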

Scared to death

The National Bureau of Economic Research (NBER) has distilled four facts from the international COVID-19 experience: that the growth in daily deaths declines to zero within 25-30 days; that daily deaths then decline; that this profile is ubiquitous; and that it is so ubiquitous that governmental non-pharmaceutical interventions (NPIs) appear to have made little or no difference. The UK government needs to understand that assuming 'cases' are growing without first discounting the possibility that what is observed is merely a property of the FPR, and then ordering anti-liberal NPIs on the back of that assumption, is not in any way 'following the science'. Even a quite simple understanding of statistics indicates that positive test results must be parsed through the filter of the relevant FPR. Fortunately, we can estimate the FPR from what little raw data the government has given us; worryingly, that estimate suggests that ~99% of all positive tests are 'false'. Meanwhile, the increased deaths from drug and alcohol abuse during lockdown, the inevitable rise in depression and suicide once the job losses that follow furlough and the business and marriage failures that follow the end of loan forbearance become manifest, and, most seriously, the missed cancer diagnoses from the 2.1m screenings that have been delayed, must all be balanced against a government response to COVID-19 that looks increasingly out of all proportion to the hard evidence. The unacknowledged FPR is taking lives, so establishing the FPR, and with it accurate numbers for the true community prevalence of the virus, is absolutely essential.

James Ferguson is the Founding Partner of MacroStrategy

The Unreliable Co#$% Test

A recent New York Times article described how as many as 90% of positive Covid-19 tests may effectively be false positives because the tests are run past the recommended threshold value (number of cycles). Most labs are running the test to 40+ cycles, which is not recommended because the results are unreliable at that level of amplification: the test starts picking up stray fragments of RNA and returns positives that can't be trusted.

The North Carolina State Laboratory of Public Health uses two methods: the CDC 2019-nCoV Real-Time RT-PCR test and the TaqPath COVID-19 Combo Kit.

The CDC kit was released shortly after the supposed US outbreaks of Covid-19 were confirmed. Many state labs are likely using this kit. The Ct threshold is 40, thus 90% of the tests are likely false positive.

Researcher David Crowe described the significance of PCR cycle thresholds:

The PCR algorithm is cyclical. At each cycle it generates approximately double the amount of DNA (which, in RT-PCR, corresponds to the RNA that the process started with). When used as a test you don't know the amount of starting material, but the amount of DNA at the end of each cycle is shown indirectly by fluorescent molecules attached to the probes. The amount of light produced after every cycle will then approximately double, and when it reaches a certain intensity the process is halted and the sample is declared positive (implying infected). If, after a certain number of cycles, there is still not sufficient DNA, the sample is declared negative (implying not infected). This cycle number (Ct) used to separate positive from negative is arbitrary, and is not the same for every organization doing testing. For example, one published paper reported using 36 as the cutoff for positive, 37-39 as indeterminate (requiring more testing), and above 39 as negative. Another paper used 37 as the cutoff, with no intermediate zone. In a list of test kits approved by the US FDA, one manufacturer each recommended cutoffs of 30, 31, 35, 36, 37, 38 and 39 cycles. A cutoff of 40 cycles was most popular, chosen by 12 manufacturers, and one each recommended 43 and 45.

Implicit in using a Ct number is the assumption that approximately the same amount of original RNA (within a factor of two) will produce the same Ct number. However, there are many possibilities for error in RT-PCR. There are inefficiencies in extracting the RNA, even larger inefficiencies in converting the RNA to complementary DNA (Bustin noted that this efficiency is rarely over 50% and can easily vary by a factor of 10), and inefficiencies in the PCR process itself. Bustin, in a podcast interview, described reliance on an arbitrary Ct number as "absolute nonsense, it makes no sense whatsoever". It certainly cannot be assumed that the same Ct number on tests done at different laboratories indicates the same original quantity of RNA.

Professor Bustin stated that cycling more than 35 times is unwise, yet it appears that nobody is limiting cycles to 35 or fewer. Cycling too much can produce false positives as background fluorescence builds up in the PCR reaction.
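
The decision rule Crowe and Bustin are describing is, in effect, just a cutoff on the cycle number at which fluorescence crosses a threshold. A minimal sketch of that logic (the 36/39 cutoffs are the example values from the paper quoted above, not any particular manufacturer's settings):

```python
def classify(ct_value, positive_cutoff=36, negative_cutoff=39):
    """Call a sample from the cycle at which its fluorescence crossed the threshold.

    ct_value is None when fluorescence never crosses the threshold within the run.
    """
    if ct_value is None or ct_value > negative_cutoff:
        return "negative"
    if ct_value <= positive_cutoff:
        return "positive"
    return "indeterminate (retest)"

for ct in (25, 37, 41, None):
    print(ct, "->", classify(ct))
```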

Clearly, the lack of uniformity between tests and the high cycle thresholds make testing extremely unreliable, so the number of fatalities attributed to Covid-19 can't be trusted, and the running tally of cases has no reliable correlation to viral spread.

It's likely this is occurring everywhere in the world, as PCR is the only mode of testing being used. None of the tests are fully FDA approved; they have only temporary approval under an Emergency Use Authorization (EUA). The FDA continues to add more test kits and reagents every day despite the fact that we are not in a state of emergency. It's time for the FDA to set reliable standards for these tests.

Also, consider the serious consequences of treating people who may have received a false positive for Covid-19. This is dangerous. And consider that we are unnecessarily quarantining people with false positives, then tracing and quarantining their contacts, all on the basis of a meaningless test.

The CDC recently stated that asymptomatic people no longer need to be tested, likely because of these excessive cycle thresholds. If asymptomatics are not a threat, no one needs to wear a mask. This all needs to stop now.

Is Co#@$ Disease Real?

Many of us have been poring over the data to understand the justifications for continued lockdowns. Governors and health directors in many states have used sketchy metrics to determine when businesses can open, when masks must be worn, how far apart we must stand, and many other arbitrary, draconian measures, all in the name of health. They claim they are protecting us from a deadly virus known as SARS-CoV-2, but a recent article titled Covid-19 Tests are Scientifically Meaningless by German journalist Torsten Engelbrecht calls the entire pandemic into question.

The article is very well written, documented and sourced, and I've shared it as widely as possible, but it desperately needs to go viral. I believe it's not receiving the attention it deserves due to its technical nature, so I am simplifying it here in the hope that more people can grasp the significance of what appears to be fraud and deception.

PCR (polymerase chain reaction) tests are used to diagnose Covid-19, but they were not designed for that purpose. The World Health Organization (WHO) accepted them for this use but never validated them. The function of a PCR test is to replicate DNA (or, via reverse transcription, RNA) billions of times so that a specimen is large enough for scientific examination. It was not meant to be a disease diagnostic tool.

In fact, in 2007 medical experts believed whooping cough was going around, and the outbreak was initially confirmed via PCR testing. This impacted thousands of people and hospitals as treatments, isolation and vaccinations were administered. Samples were finally sent to the CDC for verification and it turned out to be a false epidemic. I believe Covid-19 is also a false epidemic.

In determining the optimum method to diagnose a specific virus or pathogen, one must first have a gold standard that verifies the very existence of the virus. Distinctive symptoms may be used as that standard, but SARS-CoV-2 does not have distinctive symptoms that exclude other illnesses, so that's out. The standard protocol to verify the existence of a new virus is to isolate and purify it from an ill patient and then examine it with an electron microscope. Once purified, it can be used to verify that others with a similar illness do in fact have the same disease. That, in turn, allows the spread to be monitored.

Once the pathogen is isolated and purified, its gene sequences can be used to calibrate a PCR test kit, which ensures that the kit is detecting the same virus against which it was calibrated. The problem is that no one has ever properly isolated and purified the alleged SARS-CoV-2 virus. That is troublesome because it means we don't know what was used to calibrate the PCR test kits, and it does not appear to have been purified SARS-CoV-2.

The reason for this is that PCR is extremely sensitive, which means it can detect even the smallest pieces of DNA or RNA — but it cannot determine where these particles came from. That has to be determined beforehand.

And because the PCR tests are calibrated for gene sequences (in this case RNA sequences, because SARS-CoV-2 is believed to be an RNA virus), we have to know that these gene snippets are part of the virus being looked for. And to know that, correct isolation and purification of the presumed virus has to be carried out.

As it stands, we have no proof that the RNA detected in all of the millions of Covid-19 tests matches anything of viral origin! Engelbrecht went to great lengths to question the researchers involved and has yet to find anyone who purified the collected cellular debris.

Study 1: Leo L. M. Poon; Malik Peiris. “Emergence of a novel human coronavirus threatening human health” Nature Medicine, March 2020
Replying Author: Malik Peiris
Date: May 12, 2020
Answer: "The image is the virus budding from an infected cell. It is not purified virus."

Study 2: Myung-Guk Han et al. “Identification of Coronavirus Isolated from a Patient in Korea with COVID-19”, Osong Public Health and Research Perspectives, February 2020
Replying Author: Myung-Guk Han
Date: May 6, 2020
Answer: "We could not estimate the degree of purification because we do not purify and concentrate the virus cultured in cells."

Study 3: Wan Beom Park et al. “Virus Isolation from the First Patient with SARS-CoV-2 in Korea”, Journal of Korean Medical Science, February 24, 2020
Replying Author: Wan Beom Park
Date: March 19, 2020
Answer: “We did not obtain an electron micrograph showing the degree of purification.”

Study 4: Na Zhu et al., "A Novel Coronavirus from Patients with Pneumonia in China, 2019", New England Journal of Medicine, February 20, 2020
Replying Author: Wenjie Tan
Date: March 18, 2020
Answer: "[We show] an image of sedimented virus particles, not purified ones."

This raises the question: what was used to create the PCR tests if the virus was never properly isolated? And it totally discredits the results of all Covid-19 tests.

Koch's postulates are the criteria used to establish that a pathogen causes a disease, e.g. that SARS-CoV-2 allegedly causes the disease Covid-19.

However, these postulates have not been met for this so-called virus. So we've trashed the global economy for something that does not even meet the criteria for a disease-causing pathogen. I believe it's possible that the PCR tests being used are picking up RNA from the many infections that go around every year. There has also been a trend over the past several years of serious pneumonia outbreaks in nursing homes, so it's not surprising that we could see positive RNA tests correlated with deaths, yet with no direct evidence that SARS-CoV-2 was the cause of death.

It also means people are being treated improperly for a virus that may not even exist. Perhaps a large percentage of us would test positive with these RNA tests every year. Anecdotally, many have reported that they don't know anyone who is ill, and aside from the hysterics of the media and government officials over case numbers, there's no evidence of a highly contagious illness in our environment.

There have also been reports of people testing positive for Covid-19, then negative, and sometimes positive again! Testing! Testing! Testing! That's what all the health experts advised, yet consider that they may simply be measuring every random viral illness, cold and pneumonia that happens to contain RNA, lumping them all together and renaming the result Covid-19. That would explain why it's no more severe than the flu and why incidences are on par with past flu numbers (slightly higher, but that would be due to all of the additional random testing).

Does this explain why health directors never share their data? Is this also why it's been confirmed that many deaths were labeled as Covid-19 despite no confirmatory tests? There is also the fact that in many areas, such as Italy, autopsies confirmed that the cause of death was due to other factors, not Covid-19.

Consider that billions of dollars of vaccines and other medications may be in development for a fake disease and will be administered to as many people as possible. I honestly don’t believe that labs have the ability to create a virus and spread it through a population. And I don’t believe the bat story either.

Another consideration is that Event 201 took place roughly two months before the alleged SARS-CoV-2 outbreak became public. Event 201 was a simulated exercise of a coronavirus pandemic. What are the odds of that timing? I think many of us instinctively know that things don't add up and that this so-called virus is being used to push through a global agenda, but proving it has been difficult. Maybe Engelbrecht found the proof and we need to demand answers. Where is the proof of the isolated SARS-CoV-2 that causes Covid-19, and the proof that it meets Koch's postulates?

Please take the time to thoroughly read the entire article linked above, which goes into more detail about the error rates we may be experiencing. This is potentially the largest psychological operation ever perpetrated in history. We are being threatened daily over face masks and loss of freedom. Many have lost their jobs. Many have died alone. Many have died of suicide and drug overdoses. We deserve to have our freedom and our lives back.


If the Evidence is Unfit, You Must Acquit: Prosecutors are fighting to keep flawed forensic evidence in the courtroom

I’m sharing this important article about the reliability of forensic evidence. Link

Much of the forensic evidence used in convictions has been found unreliable. Prosecutors want to use it anyway

by Daniel Denvir

Under fire yet again, law enforcement is fighting back. Facing heavy criticism for misconduct and abuse, prosecutors are protesting a new report from President Obama’s top scientific advisors that documents what has long been plain to see: much of the forensic evidence used to win convictions, including complex DNA samples and bite mark analysis, is not backed up by credible scientific research.

Although the research is clear, many in law enforcement seem terrified that keeping pseudoscience out of prosecutions will make them unwinnable. Attorney General Loretta Lynch declined to accept the report's recommendations on the admissibility of evidence, and the FBI accused the advisors of making "broad, unsupported assertions." But the National District Attorneys Association, which represents roughly 2,500 top prosecutors nationwide, went the furthest, taking it upon itself to, in its own words, "slam" the report.

Prosecutors' actual problem with the report, produced by some of the nation's leading scientists on the President's Council of Advisors on Science and Technology, seems to be unrelated to science. Reached by phone, NDAA president-elect Michael O. Freeman could not point to any specific problem with the research and accused the scientists of having an agenda against law enforcement.

“I’m a prosecutor and not a scientist,” Freeman, the County Attorney in Hennepin County, Minnesota, which encompasses Minneapolis, told Salon. “We think that there’s particular bias that exists in the folks who worked on this, and they were being highly critical of the forensic disciplines that we use in investigating and prosecuting cases.”

That response, devoid of any reference to hard science, has prompted some mockery, including from Robert Smith, Senior Research Fellow and Director of the Fair Punishment Project at Harvard Law School, who accused the NDAA of “fighting to turn America’s prosecutors into the Anti-Vaxxers, the Phrenologists, the Earth-Is-Flat Evangelists of the criminal justice world.”

It has also, however, lent credence to a longstanding criticism that American prosecutors are more concerned with winning than with establishing a defendant's guilt beyond a reasonable doubt.

"Prosecutors should not be concerned principally with convictions; they should be concerned with justice," Daniel S. Medwed, author of "Prosecution Complex: America's Race to Convict and Its Impact on the Innocent" and a professor at Northeastern University School of Law, told Salon. "Using dodgy science to obtain convictions does not advance justice."

In its press release, the NDAA charged that the scientists, led by Human Genome Project leader Eric Lander, lack the necessary "qualifications" and relied "on unreliable and discredited research." Freeman, asked whether the NDAA was attempting to discredit scientific research without having scientists evaluate that research, demurred.

“I appreciate your question and I can’t respond to that,” he said.

Similarly, Freeman was unable to specify any particular reason that a member of the council might be biased against prosecutors.

“We think that this group of so-called experts had an agenda,” he said, “which was to discredit a lot of the science…used by prosecutors.”

The report, "Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods," was the result of a comprehensive review of more than 2,000 papers and was produced in consultation with a bevy of boldfaced names from the legal community. It found that there is no solid scientific basis to support the analyses of bite marks, firearms, biological samples containing the DNA of multiple individuals, or footwear. The report also found that the certainty of latent fingerprint analysis is often overstated, and it criticized proposed Justice Department guidelines defending the validity of hair analysis as being grounded in "studies that do not establish [its] foundational validity and reliability."

The new report is comprehensive, but it is hardly the first time that scientific research has cast doubt on the reliability of evidence used in trials, everything from eyewitness identification to arson investigations. The report cites a 2002 FBI re-examination of its own scientists' microscopic hair comparisons, which found that DNA testing showed 11 percent of the samples that had been declared matches in fact came from different people. A 2004 National Research Council report cited in the review found there was an insufficient basis upon which to draw "a definitive connection between two bullets based on compositional similarity of the lead they contain."

One of the most important developments in recent decades has been DNA science, which has not only proven that defendants have been wrongfully convicted but also raised questions about the forensic evidence used to win those convictions.

In the Washington Post, University of Virginia law professor Brandon L. Garrett describes the case of Keith Harward, who was exonerated on April 8 for a Newport News, Virginia rape and murder that DNA evidence later showed someone else committed. His conviction, for which he spent 33 years behind bars, hinged on the false testimony of two purported experts who stated that his teeth matched bite marks on the victim’s body.

“Of the first 330 people exonerated by DNA testing, 71 percent, or 235 cases, involved forensic analysis or testimony,” Garrett writes. “DNA set these people free, but at the time of their convictions, the bulk of the forensics was flawed.”

In an interview, Garrett called the NDAA response “juvenile.”

“The response seems to be you say that certain forensic sciences are unscientific, well you’re unscientific,” said Garrett. “To call a group of the leading scientists in the world unscientific, it’s just embarrassing….I really doubt that they speak for most prosecutors.”

Many cases, the report found, have “relied in part on faulty expert testimony from forensic scientists who had told juries incorrectly that similar features in a pair of samples taken from a suspect and from a crime scene (hair, bullets, bitemarks, tire or shoe treads, or other items) implicated defendants in a crime with a high degree of certainty.”

Expert witnesses have often overstated the certainty of their findings, declaring that they were 100-percent certain when in fact 100-percent certainty is scientifically impossible.

Forensic science has largely been developed within law enforcement and not by independent scientists, said Medwed. In the case of bite mark analysis, the report concludes that the method is basically worthless. But by and large, the report calls not for the science to be thrown out forever but to be improved so that it is in fact reliable.

“The NDAA response strikes me as a bit defensive to say the least and puzzling because my hope is that in looking at this report the reaction of prosecutors would be, how do we improve the system,” said Medwed. “Even if they believe that some of these disciplines are legitimate, how do we further test them, and refine them so they can be better?”

The NDAA, however, not only dismisses the scientific research in question but asserts that scientific expertise has no role to play in determining what kind of evidence judges decide to admit into court. They accuse the council of attempting “to usurp the Constitutional role of the Courts” by “insert[ing] itself as the final arbiter of the reliability and admissibility of the information generated through…forensic science disciplines.”

The council acknowledges that judges make these decisions. But since judges are typically not scientists they must make them under the guidance of scientific expertise.

“Judges’ decisions about the admissibility of scientific evidence rest solely on legal standards; they are exclusively the province of the courts and PCAST does not opine on them,” the report states. “But, these decisions require making determinations about scientific validity. It is the proper province of the scientific community to provide guidance concerning scientific standards for scientific validity.”

When prosecutors use scientific evidence to prosecute a defendant, that evidence should be scientifically valid. It’s clear, however, that it is often bogus or unreliable. There is a growing consensus that the United States locks up far too many people, and it’s increasingly clear that an untold number of those people haven’t even committed a crime. Many prosecutors might be unwilling to solve this problem. Voters, who have recently tossed out incumbents in Chicago, Cleveland and Jacksonville, might have to take the lead, said Garrett.

“The American public today doesn’t want prosecutors to win at all costs,” said Garrett. “They don’t want prosecutors using fake evidence.”

Link to article: http://www.salon.com/2016/09/23/if-the-evidence-is-unfit-you-must-acquit-prosecutors-are-fighting-to-keep-flawed-forensic-evidence-in-the-courtroom/
