Magnetic core memory was the RAM at the heart of many computer systems through the 1970s, and it is undergoing something of a resurgence today, since it is the easiest form of memory for an enterprising hacker to DIY. [Han] has an excellent writeup that goes deep into the best practices of wiring up core memory, and it pairs with his 512-bit MagneticCoreMemoryController on GitHub.

Magnetic core memory works by storing data inside the magnetic flux of a ferrite ‘core’. Magnetize it in one direction and you have a 1; the other direction is a 0. Sensing is current-based and erases the existing value, requiring a read-rewrite circuit. You want the gory details? Check out [Han]’s writeup; he explains it better than we can, complete with how to wire the ferrites and oscilloscope traces explaining why you want to wire them that way. It may be the most complete design brief written about magnetic core memory this decade.
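If the destructive-read behavior feels abstract, here is a minimal Python sketch of the idea at the bit level. The class and method names are ours for illustration, not anything from [Han]’s controller, and it models only the logical read-rewrite cycle, not the coincident-current drive electronics.

```python
# Toy model of destructive-read core memory: sensing a bit drives the core
# to the 0 state, so the controller must write the value back afterwards.
class CoreArray:
    def __init__(self, size_bits=512):
        self.cores = [0] * size_bits   # each core stores one bit as flux direction

    def _raw_read(self, addr):
        """Drive the core toward 0; a flux flip induces a pulse on the sense line."""
        old = self.cores[addr]
        self.cores[addr] = 0           # the read is destructive
        return old                     # pulse is seen only if the core held a 1

    def _raw_write(self, addr, bit):
        """Drive the core to the requested state."""
        self.cores[addr] = bit

    def read(self, addr):
        """Full read-rewrite cycle: sense the bit, then restore it."""
        bit = self._raw_read(addr)
        self._raw_write(addr, bit)     # rewrite so the data survives the read
        return bit

    def write(self, addr, bit):
        self._raw_write(addr, bit)


mem = CoreArray()
mem.write(42, 1)
assert mem.read(42) == 1               # still 1 afterwards, thanks to the rewrite
```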

This little memory pack [Han] built with this information is rock-solid: it ran for 24 hours straight, undergoing multiple continuous memory tests — a total of several gigabytes of information, with zero errors. That was always the strength of ferrite memory, though, along with the fact you can lose power and keep your data. In the retrocomputer world, 512 bits doesn’t seem like much, but it’s enough to play with. We’ve even featured smaller magnetic core modules, like the Core 64. (No prize if you guess how many bits that is.) One could be excused for considering them toys; in the old days, you’d have had cabinets full of these sorts of hand-wound memory cards.
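For reference, that kind of soak test boils down to something like the loop below: a generic write/read-back check against pseudo-random patterns. This is our sketch, not [Han]’s actual test harness, and the pass count is an arbitrary placeholder; it works with the toy CoreArray above or any object exposing write() and read().

```python
import random

def soak_test(mem, size_bits=512, passes=1000):
    """Write pseudo-random patterns, read them back, and count mismatches."""
    errors = 0
    for _ in range(passes):
        pattern = [random.getrandbits(1) for _ in range(size_bits)]
        for addr, bit in enumerate(pattern):
            mem.write(addr, bit)
        for addr, bit in enumerate(pattern):
            if mem.read(addr) != bit:
                errors += 1
    return errors

# Zero errors over many passes (and many hours) is the "rock-solid" result.
print(soak_test(CoreArray()))
```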

Magnetic core memory should not be confused with core-rope memory, which was a ROM solution of similar vintage. The legendary Apollo Guidance Computer used both.

We’d love to see a hack that makes real use of this pre-modern memory modality; if you know of one, send in a tip.


From Blog – Hackaday via this RSS feed

This isn’t the first state court to reach this conclusion, but so few courts bother to examine the science-y sounding stuff cops trot out as “evidence” that this decision is worth noting.

There’s no shortage of junk science that has been (and continues to be) treated as actual science during testimony, ranging from the DNA “gold standard” to seriously weird shit like “I can identify a suspect by the creases in his jeans.”

Anyone who’s watched a cop show has seen a detective slide a pen into a shell casing and place it gently in an evidence bag. At some point, a microscope gets involved and the prosecutor (or witness) declares affirmatively that the markings on the casing match the barrel of the murder weapon. Musical stings, ad breaks, and tidy episode wrap-ups ensue.

Maryland’s top court dismantled these delusions back in 2023 by actually bothering to dig into the supposed science behind bullet/cartridge matching. When it gazed behind the curtain, it found the AFTE (Association of Firearm and Tool Mark Examiners) and its methods more than a little questionable.

To sum up (a huge task, considering this was delivered in a 128-page opinion), AFTE’s science was little more than confirmation bias. When trainees were tested, they knew one of the items they examined came from the gun used in the test. When blind testing was utilized, the nearly 80% “success” rate in matches dropped precipitously.

He observed, however, that if inconclusives were counted as errors, the error rate from that study would “balloon[]” to over 30%. In discussing the Ames II Study, he similarly opined that inconclusive responses should be counted as errors. By not doing so, he contended, the researchers had artificially reduced their error rates and allowed test participants to boost their scores. By his calculation, when accounting for inconclusive answers, the overall error rate of the Ames II Study was 53% for bullet comparisons and 44% for cartridge case comparisons—essentially the same as “flipping a coin.”

From “pretty sure” to a coin flip. Not exactly the standard expected from supposed forensic science. And that’s common across most cop forensics. When blind testing is used, error rates soar and stuff that’s supposed to be evidence looks a whole lot more like guesswork.
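To make the arithmetic concrete, here is a toy calculation with made-up counts showing how the decision to count (or not count) inconclusives changes the reported error rate. The numbers are ours for illustration only; they are not the Ames II data.

```python
# Hypothetical results for 100 comparisons with known ground truth.
correct = 45
wrong = 5
inconclusive = 50

# Convention used in many studies: inconclusives are simply excluded.
rate_excluding = wrong / (correct + wrong)                                   # 0.10

# Convention the critic argues for: an inconclusive on a known sample is a miss.
rate_including = (wrong + inconclusive) / (correct + wrong + inconclusive)   # 0.55

print(f"{rate_excluding:.0%} vs {rate_including:.0%}")   # "10% vs 55%"
```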

The same conclusion is reached here by the Oregon Court of Appeals, which ultimately reverses the lower court’s refusal to suppress this so-called evidence.

This opinion [PDF] only runs 43 pages, but it makes the same points, albeit a bit more concisely. As the lead-off to the deep dive makes clear, cartridge matching isn’t science. It’s just a bunch of people looking at stuff and drawing their own conclusions.

As we will explain, in this case, the state did not meet its burden to show that the AFTE method is scientifically valid, that is, that it is capable of measuring what it purports to measure and is able to produce consistent results when replicated. That is so because the method does not actually measure the degree of correspondence between shell cases or bullets; rather, the practitioner’s decision on whether the degree of correspondence indicates a match ultimately depends entirely on subjective, unarticulated standards and criteria arrived at through the training and individualized experience of the practitioner.

For a similar reason, the state did not show that the method is replicable and therefore reliable: The method does not produce consistent results when replicated because it cannot be replicated. Multiple practitioners may analyze the same items and reach the same result, but each practitioner reaches that result based on application of their own subjective and unarticulated standards, not application of the same standards.

That’s a huge problem. Evidentiary standards exist for a reason. No court would allow people to take the stand and speculate wildly about whether or not any evidence exists that substantiates criminal charges. Tossing a lab coat over a bunch of speculation doesn’t suddenly make subjective takes on bullet markings “science.” And continuing to present this guesswork with any level of certainty perverts the course of justice.

[W]hen presented as scientific evidence, AFTE identification evidence—an “identification” purportedly derived from application of forensic science—impairs, rather than helps, the truthfinding process because it presents as scientific a conclusion that, in reality, is a subjective judgment of the examiner based only on the examiner’s training and experience and not on any objective standards or criteria.

In an effort to salvage this evidence, the government claimed the AFTE Journal was self-certifying. In other words, the fact that AFTE published this journal was evidence in and of itself of the existence of scientific rigor. Both the trial court and the appeals court disagreed:

The court rejected the idea that the AFTE Journal, which the government argued shows that the method is subject to peer review, satisfies that factor for two reasons: because the AFTE Journal “is a trade publication, meant only for industry insiders, not the scientific community,” and, more importantly, because “the purpose of publication in the AFTE Journal is not to review the methodology for flaws but to review studies for their adherence to the methodology.”

The ruling quotes many of the same studies cited by the Maryland court in its 2023 decision — the blind studies that made it clear cartridge matching is mostly guesswork. This court arrives at the same conclusion:

[T]he AFTE method, undertaken by a trained examiner, may be effective at identifying matches, but the problem is that, from what was in the record before the court, the analysis is based on training and experience—ultimately, hunches—not science.

To sum up, this method lacks anything that could be considered sound science:

Neither the AFTE theory nor the AFTE method prescribes or quantifies what the examiner is looking for; the examiner is looking for sufficient agreement, which is defined only by their own personal identification criteria.

Having arrived at this conclusion, the court does what it has to do. It reverses the lower court’s denial of the suspect’s suppression motion. The “error” of putting this “evidence” on the record was far from harmless. The state has already announced it plans to appeal this decision, but for now, investigators hoping shell markings will help them close some cases might want to dig a little deeper into the evidence locker.


From Techdirt via this RSS feed

Recently, the AI risk and benefit evaluation company METR ran a randomized controlled trial (RCT) on a gaggle of experienced open source developers to gain objective data on how the use of LLMs affects their productivity. Their finding was that using LLM-based tools like Cursor Pro with Claude 3.5/3.7 Sonnet reduced productivity by about 19%, with the full study by [Joel Becker] et al. available as a PDF.

This study was also intended to establish a methodology to assess the impact from introducing LLM-based tools in software development. In the RCT, 16 experienced open source software developers were given 246 tasks, after which their effective performance was evaluated.
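As a rough sketch of what such an evaluation can look like (this is not METR’s analysis code; the task times and the geometric-mean approach are assumptions for illustration), one way to estimate a slowdown from randomized task assignments is to compare completion times between the AI-allowed and AI-disallowed arms:

```python
import math

# Hypothetical completion times in minutes. In the real study, each task was
# randomly assigned to an "AI allowed" or "AI disallowed" condition.
ai_allowed    = [95, 130, 60, 180, 75, 110]
ai_disallowed = [80, 100, 55, 150, 70,  90]

def geo_mean(xs):
    """Geometric mean, which downweights a few very long tasks."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# A ratio above 1 means tasks took longer with the AI tools enabled.
slowdown = geo_mean(ai_allowed) / geo_mean(ai_disallowed)
print(f"estimated slowdown: {slowdown - 1:.0%}")
```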

A large focus of the methodology was on creating realistic scenarios instead of using canned benchmarks. This included adding features to code, bug fixes, and refactoring, much as the developers would do in their work on their respective open source projects. The observed increase in the time it took to complete tasks with the LLM’s assistance was found to be likely due to a range of factors, including over-optimism about the LLM tools’ capabilities, LLMs interfering with existing knowledge of the codebase, poor LLM performance on large codebases, low reliability of the generated code, and the LLM doing very poorly at using tacit knowledge and context.

Although METR suggests that this poor showing may improve over time, it seems fair to question whether LLM coding tools are a useful coding partner at all.


From Blog – Hackaday via this RSS feed