Errors, Corrections, & Data

Corrections

We have a strong track record of correcting mistakes in a very public, visible way. Errors carry cascading degrees of severity, described further below. Regardless of how severe the error is, the first steps always involve the following:

  1. We re-run and validate the data point (or, if it is a simple fact, like we got the cache amount wrong, we verify it against the fact)

  2. If it is erroneous, we immediately rectify it, determine whether it was machine or human error, and then identify a way to avoid it ever happening again. We often share this change publicly.

  3. We pin a top comment on the video with the corrected information for high visibility.

  4. We update the video description with similar or the same information as the pin.

  5. We tweet the correction.

  6. We update the TIMESTAMPS section to state “ERROR” or “CORRECTION” to make all viewers aware during viewing.

  7. We often also post the correction to YT Community (if we feel most people have already seen the video).

  8. In extreme cases, we may make a note in the next HW News about what we’ve changed to avoid that mistake in the future.

  9. The worst-case scenario is that we completely pull a video if we feel it puts substantially wrong information out there.

But there are some caveats here: Just like manufacturers don’t control our opinions, neither does the audience. After all, that’s why people like us: We say what we believe, and often with a lot of conviction. That means we may sometimes be in disagreement with some of the audience. We will always re-evaluate to make sure we are grounded in reality in cases of stark disagreement, but if our findings support that our opinion is the more likely correct one, we will stand by it.

Errors

We already have an extensive QC pipeline that involves the following checks:

  1. Writer QCs Technician results

  2. Technician QCs Writer interpretation of data

  3. Steve QCs writing (unless writing it himself)

  4. Editor compiles the video and passively looks for further errors

  5. Technician and Writer QC video

  6. Steve QCs video

Even with all these steps, because we are often covering complex systems with limited time leading into reviews, we still occasionally make errors. Sometimes they are machine errors. We refuse to grow the team beyond a certain size because it becomes too distant from the core function and too centrally focused on money, which means we sometimes don’t have the stamina to catch every small mistake as we continually bulk up testing and quality. In those instances, we follow the steps outlined under Cascading Severity below.

HOWEVER:

Not everything the audience perceives as an “error” is one. A lot of the time, people just don’t understand why certain tests can’t be compared. Not long ago, someone emailed us to inform us that our Shadow of the Tomb Raider results were “wrong” because a new chart’s data didn’t match the data from last time. When we checked, we saw they were referring to a chart with Ray Tracing features on compared against a chart with RT features off. In other words, the two charts are completely incomparable, and differing results are expected.

So we don’t want people to get into the habit of thinking any random poster on Reddit with the right vocabulary is accurate in identifying errors. We had someone else trying to compare absolute numbers from FFXIV’s bench between their system and ours, even though we conduct the test completely differently, in different areas, and with different capture methods. So just be advised: a lot of these posts are wrong. But when we see them, the first thing we do is look into their accuracy.

In other words: It is not an “error” to have an unpopular opinion, and as such, no change will be made to the content if a second look reinforces that opinion. The errors process is reserved for errors only.

We just want to make sure everyone’s expectations are aligned.


Cascading Severity

There are cascading degrees of severity for an error in our content. Each type of (verified) error is described below, paired with our response:

  1. No impact to content

    • Example: Typo

    • Response: We might post a pinned comment or update the description to notate this error. If it’s small, like a typo where it says “NVDIIA” instead of “NVIDIA,” we may not even acknowledge it, as it just isn’t necessary.

  2. Low impact to content

    • Example: A wrong statement on a product spec, such as cache, with no influence on the data or tests; in this scenario, all data remains intact and the conclusion is therefore unaffected.

    • Example: One chart has incorrect results due to test error or machine error.

    • Response: We will follow the guidelines laid out above and make the correction via tweet, pinned comment, and description. If a chart is wrong, we will also post to YT Community. We will provide the new (correct) numbers in those places, but we cannot replace the video in place. We will update the video TIMESTAMPS section to read “CORRECTION” or “ERROR” before the affected chart so that everyone is aware while watching.

  3. Moderate impact to content

    • Example: More than one chart has incorrect results.

    • Response: We will follow the steps laid out in #2 above and will also evaluate whether the error affects the video’s conclusion in a more serious way. If the affected data/charts are not part of our conclusion on the product’s value, we will provide a written update/statement with any nuance. If there is a minor or caveated impact, we’ll pin a comment. If the impact is significant, see #4 below.

  4. High impact to content

    • Example: The wrong settings were used for one set of tests applicable to the DUT or conclusion.

    • Example: Several charts contain invalid data due to technician or machine error applicable to the DUT or conclusion.

    • Response: We investigate first, verify that we’re right in our understanding of the high impact, and then immediately pull the video. We then broadcast as loudly as we can what was wrong and why we’re changing it, including posts to all of the above channels/comms as well as a new video with the fix. We have only had to do this a few times, but it has always been worth the certainty that what we’re putting out is right and good.


Notes from Steve:

Some final thoughts: We find it weak when people say “mistakes happen.” Yes, they do, but that’s a passive, convenient platitude that deflects ownership. Our approach is more along the lines of “yeah, we screwed that up big time. We fixed it, and here’s how we’re preventing it going forward.”

There were times when I personally didn’t take this approach, or when it took me too long to realize that we were in the wrong. One example is our old Ryzen memory research: we had the right idea, but poor execution, and we didn’t spend enough time on the piece. We should have re-done it at the time. We have grown and learned from those instances over the years. Oddly, the more experience I’ve gained in this field, the more I’ve realized how little we all know. You could spend a career reviewing every aspect of one CPU.

It has taken me time to come around to owning our mistakes as we make them, and although we can always continue to improve, I feel the responses outlined above put us in a spot that ensures as many people as possible are aware of a correction when one is needed. Likewise, when one isn’t needed and a community member calls for a retest, at least we can say “yeah, we retested it and it was the same.” That gives a certain freeing certainty. This is an area where I have experienced a lot of personal growth over the years, yet I still see a need for continued growth here.
