SVP Technology at Fiserv; large scale system architecture/infrastructure, tech geek, reading, learning, hiking, GeoCaching, ham radio, married, kids

Trump Fires Cyber Safety Board Investigating Salt Typhoon Hackers

1 Comment
In a letter sent today, the acting DHS secretary terminated the memberships of all advisory boards, including the Cyber Safety Review Board (CSRB), which was tasked with investigating state-sponsored cyber threats against the US.

JayM
5 hours ago
Jeesh. Just got a briefing on Salt Typhoon last week... not sure if those researchers were on the advisory board or not... but just wow. Not good.
Atlanta, GA

Netflix announces price increase for all three of its plans

1 Comment

Netflix is hiking its prices once again. Starting today, the cheapest Netflix plan will cost $7.99 per month, while the top-of-the-line plan with 4K streaming will cost you a cool $24.99 monthly.

JayM
6 hours ago
$25. Ouch. We ditched Netflix for SEVERAL years when they first switched to streaming. Eventually they had enough shows that we picked it back up in '21 or '22... I'm not getting rid of it... but jeesh. $25... I think that's as high as I'll go. Next raise, time to cancel again.
Atlanta, GA

Mercator: Extreme

1 Comment
JayM
6 hours ago
Weird.
Atlanta, GA
Share this story
Delete

The EU’s AI Act

1 Comment

Have you ever been in a group project where one person decided to take a shortcut, and suddenly, everyone ended up under stricter rules? That’s essentially what the EU is saying to tech companies with the AI Act: “Because some of you couldn’t resist being creepy, we now have to regulate everything.” This legislation isn’t just a slap on the wrist—it’s a line in the sand for the future of ethical AI.

Here’s what went wrong, what the EU is doing about it, and how businesses can adapt without losing their edge.

When AI Went Too Far: The Stories We’d Like to Forget

Target and the Teen Pregnancy Reveal

One of the most infamous examples of AI gone wrong happened back in 2012, when Target used predictive analytics to market to pregnant customers. By analyzing shopping habits—think unscented lotion and prenatal vitamins—they managed to identify a teenage girl as pregnant before she told her family. Imagine her father’s reaction when baby coupons started arriving in the mail. It wasn’t just invasive; it was a wake-up call about how much data we hand over without realizing it.

Clearview AI and the Privacy Problem

On the law enforcement front, tools like Clearview AI created a massive facial recognition database by scraping billions of images from the internet. Police departments used it to identify suspects, but it didn’t take long for privacy advocates to cry foul. People discovered their faces were part of this database without consent, and lawsuits followed. This wasn’t just a misstep—it was a full-blown controversy about surveillance overreach.

The EU’s AI Act: Laying Down the Law

The EU has had enough of these oversteps. Enter the AI Act: the first major legislation of its kind, categorizing AI systems into four risk levels:

  1. Minimal Risk: Chatbots that recommend books—low stakes, little oversight.
  2. Limited Risk: Systems like AI-powered spam filters, requiring transparency but little more.
  3. High Risk: This is where things get serious—AI used in hiring, law enforcement, or medical devices. These systems must meet stringent requirements for transparency, human oversight, and fairness.
  4. Unacceptable Risk: Think dystopian sci-fi—social scoring systems or manipulative algorithms that exploit vulnerabilities. These are outright banned.

For companies operating high-risk AI, the EU demands a new level of accountability. That means documenting how systems work, ensuring explainability, and submitting to audits. If you don’t comply, the fines are enormous—up to €35 million or 7% of global annual revenue, whichever is higher.
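That "whichever is higher" structure is worth pausing on, because it behaves differently at different company sizes. A minimal sketch (the function name and example revenue figures are assumptions for illustration; the €35 million / 7% thresholds come from the text above):

```python
# Illustrative sketch of the Act's top fine tier as described above:
# the higher of a fixed €35M cap or 7% of global annual revenue.
FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07  # 7% of global annual revenue

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the larger of the fixed cap and the revenue-based fine."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# For a €100M company, 7% is only €7M, so the €35M floor dominates.
print(int(max_fine_eur(100_000_000)))    # 35000000
# For a €1B company, 7% is €70M, which exceeds the floor.
print(int(max_fine_eur(1_000_000_000)))  # 70000000
```

The point of the two-part formula is that the fixed floor bites small firms while the percentage keeps scaling for large ones, so no company can treat the penalty as a rounding error.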

Why This Matters (and Why It’s Complicated)

The Act is about more than just fines. It’s the EU saying, “We want AI, but we want it to be trustworthy.” At its heart, this is a “don’t be evil” moment, but achieving that balance is tricky.

On one hand, the rules make sense. Who wouldn’t want guardrails around AI systems making decisions about hiring or healthcare? But on the other hand, compliance is costly, especially for smaller companies. Without careful implementation, these regulations could unintentionally stifle innovation, leaving only the big players standing.

Innovating Without Breaking the Rules

For companies, the EU’s AI Act is both a challenge and an opportunity. Yes, it’s more work, but leaning into these regulations now could position your business as a leader in ethical AI. Here’s how:

  • Audit Your AI Systems: Start with a clear inventory. Which of your systems fall into the EU’s risk categories? If you don’t know, it’s time for a third-party assessment.
  • Build Transparency Into Your Processes: Treat documentation and explainability as non-negotiables. Think of it as labeling every ingredient in your product—customers and regulators will thank you.
  • Engage Early With Regulators: The rules aren’t static, and you have a voice. Collaborate with policymakers to shape guidelines that balance innovation and ethics.
  • Invest in Ethics by Design: Make ethical considerations part of your development process from day one. Partner with ethicists and diverse stakeholders to identify potential issues early.
  • Stay Dynamic: AI evolves fast, and so do regulations. Build flexibility into your systems so you can adapt without overhauling everything.
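The first two recommendations above amount to an inventory exercise: list your systems, assign each a risk tier, and flag the ones that trigger high-risk compliance work. A hypothetical sketch (the system names and tier assignments are assumptions mirroring the examples earlier in the article, not legal guidance):

```python
# Hypothetical AI-system inventory mapped to the Act's four risk tiers.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

inventory = {
    "book-recommendation chatbot": "minimal",      # low stakes
    "spam filter": "limited",                      # transparency only
    "resume-screening model": "high",              # hiring decisions
    "social-scoring system": "unacceptable",       # outright banned
}

# High-risk systems need documentation, oversight, and audit readiness.
needs_compliance_review = [
    name for name, tier in inventory.items() if tier == "high"
]
# Unacceptable-risk systems must be retired, not remediated.
banned = [name for name, tier in inventory.items() if tier == "unacceptable"]

print(needs_compliance_review)  # ['resume-screening model']
print(banned)                   # ['social-scoring system']
```

Even a spreadsheet-level version of this gives you the starting artifact an auditor or regulator will ask for first.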

The Bottom Line

The EU’s AI Act isn’t about stifling progress; it’s about creating a framework for responsible innovation. It’s a reaction to the bad actors who’ve made AI feel invasive rather than empowering. By stepping up now—auditing systems, prioritizing transparency, and engaging with regulators—companies can turn this challenge into a competitive advantage.

The message from the EU is clear: if you want a seat at the table, you need to bring something trustworthy. This isn’t about “nice-to-have” compliance; it’s about building a future where AI works for people, not at their expense.

And if we do it right this time? Maybe we really can have nice things.

The post The EU’s AI Act appeared first on Gigaom.

JayM
13 hours ago
Seems to make sense at first read of this Gigaom article.
Atlanta, GA
HarlandCorbin
13 hours ago
I love that the fine is X Euros, or 7% of global annual revenue, whichever is higher. We *NEED* more fines to be like this.
JayM
12 hours ago
Absolutely agree. At the individual level as well. I hate to admit it... but HOV fines are a great example. A $250 fine, heck a $500 fine... for a lot of people is a deterrent; for others it is simply a Road Toll fee that only gets levied sometimes.

'Big boy' spider becomes Australia's largest and deadliest arachnid after surprise discovery

1 Comment and 2 Shares
The Sydney funnel-web spider has extremely dangerous venom, but according to a new study this spider is actually three different species — one of which, the "Newcastle big boy," is much larger.

JayM
1 day ago
Atlanta, GA
1 public comment
fxer
21 hours ago
Three times the nightmare fuel
Bend, Oregon

Concise Link Descriptions in netlab Topologies (Part 1)

1 Share

One of the goals we’re always trying to achieve when developing netlab features is to make the lab topologies as concise as possible. Among other things, netlab supports numerous ways of describing links between lab devices, allowing you to be as succinct as possible.

A bit of background first:

  • In the end, netlab collects all links in the links list before starting the data transformation process.
  • Every entry in the links list is a dictionary. That dictionary can contain link attributes and must contain a list of interfaces connected to the link.
  • Every interface must have a node (specifying the lab device it belongs to) and could contain additional interface attributes.
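The structure those bullets describe can be sketched in plain Python as a simplified illustration (this is not netlab's actual internal code, and the attribute names `prefix` and `ifname` are assumptions used only as examples of link and interface attributes):

```python
# Sketch of the normalized "links" list: each link is a dictionary that
# may carry link attributes and must carry a list of interfaces; each
# interface must name its node and may carry extra interface attributes.
links = [
    {
        "prefix": "10.0.0.0/30",                  # hypothetical link attribute
        "interfaces": [
            {"node": "r1", "ifname": "eth1"},     # node is required
            {"node": "r2"},                       # other attributes optional
        ],
    },
]

# Minimal validation of the two "must" rules from the bullets above.
for link in links:
    assert isinstance(link.get("interfaces"), list), "link needs interfaces"
    for intf in link["interfaces"]:
        assert "node" in intf, "interface needs a node"

print("all links valid")
```

Whatever shorthand you use in the topology file, it ultimately has to normalize into this shape before the data transformation process starts.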
JayM
1 day ago
Atlanta, GA