SVP Technology at First Data Corp; large scale system architecture, infrastructure, tech geek, reading, learning, hiking, GeoCaching, ham radio, married, kids
9510 stories · 38 followers

Hackers manage – just – to turn Amazon Echo into listening device


But it requires custom hardware, firmware and access to your Wi-Fi

DEF CON Hackers have managed to hack Amazon's Echo digital assistant and effectively turn it into a listening device, albeit through a complex and hard-to-reproduce approach.…

Shared by JayM · 13 hours ago · Atlanta, GA

Microsoft ADFS Vulnerability Lets Attackers Bypass MFA

The flaw lets an attacker use the same second factor to bypass multifactor authentication for any account on the same ADFS service.

JayM · 13 hours ago · Atlanta, GA: Ug

Three more data-leaking security holes found in Intel chips as designers swap security for speed


Apps, kernels, virtual machines, SGX, SMM at risk from attack

Intel will today disclose three more vulnerabilities in its processors that can be exploited by malware and malicious virtual machines to potentially steal secret information from computer memory.…

JayM · 13 hours ago · Atlanta, GA: Argh

HarlandCorbin · 7 hours ago: *looks at laptop* Oh crap, I forgot this one was Intel-based...

Starting a series of blog posts on wireless and edge/micro data centers, Part 1


In 2010 I wrote about containers being put at cell tower sites. Over the past couple of years there has been a lot of excitement about edge/micro data centers.

One interesting pain point illustrating why cell site IT infrastructure needs to improve: the sites run at a PUE of 2.0, meaning as much power goes to cooling and overhead as to the IT equipment itself. https://www.zdnet.com/article/what-is-5g-everything-you-need-to-know/

Cooling and the costs associated with facilitating and managing cooling equipment, according to studies from analysts and telcos worldwide, account for more than half of telcos' total expenses for operating their wireless networks. Global warming (which, from the perspective of meteorological instrumentation, is indisputable) is a direct contributor to compound annual increases in wireless network costs. Ironically, as this 2017 study by China's National Science Foundation asserts, the act of cooling 4G LTE equipment alone may contribute as much as 2 percent to the entire global warming problem.

THE WORLD'S BIGGEST EXAMPLE

[Image: China Mobile's breakdown of its annual capital and operational expenditures for maintaining one 3G base station. (Image: China Mobile)]

One strategy to fund 5G deployments is to dramatically reduce the cost of cell site infrastructure.

Moving BBU processing to the cloud eliminates an entire base transmission system (BTS) equipment room from the base station (BS). It also completely abolishes the principal source of heat generation inside the BS, making it feasible for much, if not all, of the remaining equipment to be cooled passively — literally, by exposure to the open air. The configuration of that equipment could then be optimized, like the 5G trial transmitter shown above, constructed by Ericsson for Japan’s NTT DOCOMO. The goal for this optimization is to reduce a single site’s power consumption by over 75 percent.

What’s more, it takes less money to rent the site for a smaller base station than for a large one. Granted, China may have a unique concept of the real estate market compared to other countries. Nevertheless, China Mobile’s figures show that rental fees with C-RAN were reduced by over 71 percent, contributing to a total operational expenditure (OpEx) reduction for the entire base station site of 53 percent.
— https://www.zdnet.com/article/what-is-5g-everything-you-need-to-know/

Given the power consumption problem of cell sites and the drive to move cell site hardware to a cloud-based infrastructure supporting a 40 km range, how many edge data centers are needed for a given area?

Having fewer cloud cell sites, each supporting multiple towers, looks like the direction. When I wrote about containers at cell sites in 2010, I also imagined a container supporting multiple cell towers.

Some people get excited about low latency at the edge. Urs Hoelzle, at one of the last Structure events, made the observation that people are overestimating the business value of latency. Will users pay for sub-5 ms latency, or is 10 ms fine? Light travels 300,000 meters (186 miles) in 1 millisecond.
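
A quick sketch of that arithmetic in Python (the 0.68 velocity factor for light in fiber is an assumed round figure; real paths add routing slack and equipment delay):

    # Back-of-envelope propagation delay, ignoring all equipment in the path.
    C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum: ~300 km per millisecond
    FIBER_VELOCITY_FACTOR = 0.68      # assumption: light in fiber travels at roughly 2/3 c
    KM_PER_MILE = 1.609344

    def one_way_ms(miles: float, in_fiber: bool = True) -> float:
        """One-way delay in milliseconds over a straight run of the given length."""
        speed = C_KM_PER_MS * (FIBER_VELOCITY_FACTOR if in_fiber else 1.0)
        return miles * KM_PER_MILE / speed

    print(f"{1 / one_way_ms(1, in_fiber=False):.0f} miles per ms in vacuum")  # ~186
    print(f"{1 / one_way_ms(1):.0f} miles per ms in fiber")                   # ~127

    # Wilmington, NC to Los Angeles, CA along the roughly 2,555-mile I-40 path:
    print(f"I-40 fiber round trip: {2 * one_way_ms(2555):.0f} ms")            # ~40 ms

Roughly 40 ms coast to coast with no equipment in the path, which matches the comment below.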

JayM · 18 hours ago · Atlanta, GA: And in fiber light travels about 128 miles in 1 ms... or about 64 miles round trip in 1 ms... or right at 40 ms RTT Wilmington, NC to Los Angeles, CA following the 2,555-mile I-40 path... assuming no equipment between.

CFO Series: An Executive View of Lean and Agile IT


Over the last two decades, the IT profession has developed new ways of working that are intended to deliver better business value more quickly and at lower risk. Or, as Jonathan Smart of Barclays likes to say, “Better, Faster, Safer, Happier.”[1] There are buzzwords associated with these techniques, of course, as with everything in IT—in this case, Agile, Lean, and DevOps are the terms to know. Unfortunately, these techniques are often presented as IT-focused, with unclear benefits for the enterprise. They even seem to bring with them a danger that they might undermine the CFO’s or CEO’s ability to oversee IT-related initiatives.

They do nothing of the sort. Agile, Lean, and DevOps are ways of delivering business value, streamlining digital delivery, fostering innovation, and making the enterprise nimbler. I believe that they are the best thing that has happened to CFOs since the invention of the spreadsheet, helping them to increase returns, oversee investments, implement controls, gain transparency, provide better data to the enterprise, and, of course, manage costs.

Lean

Lean IT delivery is based on the same concepts as Lean Manufacturing and Lean Six Sigma, all imported from the Toyota Production System that pioneered them. The general idea of Lean is to eliminate waste by concentrating on reducing lead times. When an enterprise adopts a Lean point of view, it maps out the processes it uses to deliver value and examines each step to find waste that makes the process take longer than necessary. By eliminating each type of waste, a business can shorten its lead times and reduce its costs.

IT delivery is amenable to a Lean approach, although IT must be thought of as a product development process rather than a manufacturing process (that is, a process that is different every time it is performed). For this reason, Six Sigma techniques, which try to reduce variance, are not generally applicable. But as with other business processes, delivering IT capabilities involves a series of steps, each of which often contains waste. The typical sources of waste in an IT process correspond fairly closely to those in a manufacturing process. Authors Mary and Tom Poppendieck have identified them as partially done work, extra processes, extra features, task switching, waiting, motion, and defects.[2]

So, why is it so important to eliminate waste and shorten lead times in IT delivery? One reason is to reduce time-to-market, of course; or, for internal use capabilities, the time until value can be harvested from an investment. Another reason is that speed helps make sure IT capabilities are as effective as possible. IT teams can now quickly deliver minimum viable products to users, check to make sure they are accomplishing what they should, and then continue adding features or making changes. Speed creates business agility. Because capabilities are constantly being finished, IT is able to pivot and work on other things that become more important without wasting any of its previous work. And finally, speed reduces risk, both because unforeseen events can be quickly addressed and because the delivery risk of IT capabilities is reduced.

The value stream for delivering an IT capability is different in every organization, but typically it includes steps like this: the business need is recognized and expressed, requirements are written, a plan is created based on the requirements, a business plan is prepared, a governance process acts on the business plan, resources are acquired, software is developed, software is tested, security is verified, software is deployed, users are trained, and the capability is launched. This is a long process—and it can hide a great deal of waste. Much of the process (and the potential for waste) is outside the direct control of the IT organization, or it is at the interface between it and the rest of the business. A careful look at the processes informed by Lean software delivery techniques can make a substantial difference in business outcomes.


Agile

Agile IT delivery is based on a simple principle: in a complex environment (which IT delivery certainly is) it is better to learn and adjust than to strictly follow a plan made in advance. (I’ll explain in a moment how this is connected with Lean.) The idea of deliberately diverging from a plan might sound dangerous. It seems like it would be impossible to control and impossible to hold people accountable. But in fact it is not. It is rigorously controlled but through very different mechanisms.

I sometimes like to think of Agile techniques in terms of risk mitigation. If we make a detailed plan that covers—let’s say—a three-year project and then try to implement it, we are accepting a number of large risks, notably (1) the risk that the plan will have mistakes, (2) the risk that circumstances will change over those three years, and (3) the delivery risk that in three years the product will not have been completed. In the traditional way of delivering IT (the so-called Waterfall, or Gantt chart-obsessed approach), the product is not delivered until the end of the project, so the entire amount of the investment is at risk until the end of the three years when the results become visible (or not).

How serious are these risks? Very.

  • Detailed plans for IT systems are always wrong, as we have found in our experience over multiple decades. Studies have also shown that more than half of the features requested will rarely or never be used. And even if everything is delivered according to plan, it still might not meet the business need it was intended to address!
  • The amount of change we expect over the three years depends on the amount of uncertainty in the business and technical environments, and we happen to be in a time of fast change and high uncertainty. In the course of three years, startups are launched and disrupt entire industries. Agility, the ability of the business to respond to change, is at odds with sticking to a fixed plan, yet it is extremely valuable.
  • A Microsoft study showed that only one-third of ideas actually have the intended result, another one-third have the opposite effect, and the last third are neutral. We have all seen cases where an IT system was supposed to reduce costs but didn’t, or was supposed to increase revenues but didn’t.[3] Another study found that companies are wasting nearly $400 billion per year on digital initiatives that don’t deliver their expected return.[4]
  • Numerous studies have shown that the larger an IT project is, the higher its risk of not delivering. A short increment of work is more likely to accomplish its goal and is more predictable.

Agile practices allow for constant learning and adjustment, working in small cycles to finish product and give it to users for feedback. It is practiced by small, autonomous, self-contained teams that can quickly learn and adjust with minimal ceremony. Because Agile teams are trying to finish product quickly, it is natural to combine Agile principles with Lean practices, which focus on reducing cycle times. The combination of the two gives businesses agility, risk mitigation, and cost-effectiveness.


DevOps

In Lean theory, two important sources of waste are handoffs (motion and waiting) and large batch sizes. Traditional IT practices were based on handoffs between development, testing, and operations, who deployed the product. DevOps, the state-of-the-art set of practices in Lean and Agile IT, addresses these handoffs by combining development, testing, and operations skills on a single, small team accountable as a whole for results.

Large batch sizes in IT are large groups of requirements. A DevOps team processes only a small set of requirements at a time, using a highly automated process to deploy capabilities to users quickly before moving on to another small set of requirements. As a result, DevOps teams deploy code very often—sometimes hundreds or thousands of times a day.

The heavy use of automation in DevOps has benefits for controls and compliance. Many of the organizational controls that would have been operated manually can now be automated, making them more reliable and easily documentable, and allowing them to be applied continuously rather than periodically.
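
For illustration, such a control can be as small as a pipeline step that fails a deploy whenever a service's configuration violates policy. A minimal sketch in Python, assuming a hypothetical JSON config format and two illustrative controls:

    import json
    import sys

    # Assumed policy (illustrative): every service config must pin these settings.
    POLICY = {"encryption_at_rest": True, "mfa_required": True}

    def violations(config_path: str) -> list[str]:
        """Return the policy keys the given service config fails to satisfy."""
        with open(config_path) as f:
            config = json.load(f)
        return [key for key, required in POLICY.items() if config.get(key) != required]

    if __name__ == "__main__":
        failed = violations(sys.argv[1])
        if failed:
            print("policy violations:", ", ".join(failed))
            sys.exit(1)  # nonzero exit fails the pipeline run, blocking the deploy

Because the check runs on every change rather than at audit time, the control is applied continuously and its evidence (the pipeline log) is produced automatically.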

DevOps is an excellent way to foster innovation. With it, new ideas can be tested quickly, inexpensively, and at low risk. Working in the cloud enhances these benefits: infrastructure can be provisioned instantly and then later be de-provisioned. IT capabilities that would take the enterprise a long time to build can be accessed as pre-created building blocks and incorporated into experiments.

So what are the business implications of Agile, Lean, and DevOps practices?

  • Fast time to market or time to value for internal use products
  • Less waste from producing unneeded capabilities
  • Less waste from producing capabilities that do not accomplish objectives
  • Less waste in processes (both inside and outside of IT)
  • Reduced risk
  • Increased innovation
  • Better operational controls through automation

Let’s return to the issues of control. In the traditional approach to overseeing IT initiatives, governance is primarily an upfront matter: once a go/no-go decision is made, overseers are generally uninvolved unless the project breaches thresholds or until certain milestones are reached. It is a discrete oversight process, popping into the picture at intervals but otherwise absent. You can think of the Agile approach as one of continuous oversight and transparency. The project team delivers frequently, and results are apparent as those deliveries are made. Because of the agility of the process, the oversight body can choose to change direction at any moment, end the investment, increase it, or substitute other objectives.

With an Agile process the enterprise can fix and hold to a schedule or budget; it simply trades off scope to meet the schedule or budget. I suggest holding most cost categories fixed, just as one does with a budget. Budgets place a cap on what an organizational unit can do in a single budget cycle—once you run out of budget you stop spending. It’s the same with so-called “requirements.” (So-called because they are not really “required” but subject to budget availability!) If money runs out during an Agile IT project, then nothing has been lost because the work that has been finished is usable. In fact, the initiative can be terminated early, even if budget remains, based on changing priorities or because enough success has already been achieved. Or the enterprise can make a conscious decision to increase the budget to implement the remaining features. Agile is a continuous investment process, where the business case is (effectively) recalculated every moment.

Agile approaches place a high value on adjusting a plan as information becomes available, and a low value on conformance to the plan per se. But that doesn’t mean that it is uncontrolled. I think of it as being controlled by conformance to business objectives. As the project progresses, features are rolled out, their business impact is gauged, and the project is adjusted based on its impact on the intended objectives. The object is to get the best return from whatever is invested.

But you have to have agreement on what that “return” is intended to be. Cost savings? Increased revenue? Better customer service? Long-term agility? These are all valid goals. On the other hand, meeting all of the requirements specified at the beginning of a project is not a real business objective, nor is building a particular set of features. A healthy project is one that meets its objectives, not one that finishes all its requirements. It should continuously adapt in order to best accomplish those objectives, given reality.

CFOs should be as excited about these new IT approaches as CIOs are. They provide ways to get better results for the enterprise by taking advantage of what is now possible with IT tools.

[1] https://www.home.barclays/news/2018/02/jonathan-smart.html

[2] Poppendieck, Mary and Tom Poppendieck. Lean Software Development: An Agile Toolkit, p. 4. These correspond to the classic sources of waste in Lean theory: inventory, extra processing, overproduction, transportation, waiting, motion, and defects.

[3] Humble, Jez, Lean Enterprise, p. 179.

[4] http://www.genpact.com/insight/article/cfo-challenges-in-a-digital-world

JayM · 1 day ago · Atlanta, GA: Yeap

Encrypting NFSv4 with Stunnel TLS


NFS clients and servers push file traffic over clear-text connections in the default configuration, which is incompatible with sensitive data. TLS can wrap this traffic, finally bringing protocol security. Before you use your cloud provider's NFS tools, review all of your NFS usage and secure it where necessary.

The Network File System (NFS) is the most popular file-sharing protocol in UNIX. Decades old and predating Linux, the most modern v4 releases are easily firewalled and offer nearly everything required for seamless manipulation of remote files as if they were local.

The most obvious feature missing from NFSv4 is native, standalone encryption. Absent Kerberos, the protocol operates only in clear text, and this presents an unacceptable security risk in modern settings. NFS is hardly alone in this shortcoming, as I have already covered clear-text SMB in a previous article. Compared to SMB, NFS over stunnel offers better encryption (likely AES-GCM if used with a modern OpenSSL) on a wider array of OS versions, with no pressure in the protocol to purchase paid updates or newer OS releases.

NFS is an extremely common NAS protocol, and extensive support is available for it in cloud storage. Although Amazon EC2 supports both clear-text and encrypted NFS, Google Cloud makes no mention of data security in its documented procedures, and recent major NFS initiatives from Microsoft Azure and Oracle Cloud raise similar suspicions. When using these features over untrusted networks (even within the hosting provider), it must be assumed that vulnerable traffic will be captured, stored and reconstituted by hostile parties should they have the slightest interest in the content. Fortunately, wrapping TCP-based NFS with TLS encryption via stunnel, while not obvious, is straightforward.
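
The shape of the wrapper, as a minimal sketch (service names, port numbers, hostnames and certificate paths below are illustrative, not taken from the article): the server-side stunnel accepts TLS connections and forwards the decrypted traffic to the local NFS daemon on TCP 2049, while the client-side stunnel exposes a local port that is mounted as if it were the server.

    ; server side: /etc/stunnel/nfs-tls.conf (illustrative paths and ports)
    cert = /etc/stunnel/nfs-server.pem

    [nfs-tls]
    accept = 2363
    connect = 127.0.0.1:2049

    ; client side: forward a local port to the server's TLS listener
    client = yes

    [nfs-tls]
    accept = 127.0.0.1:2323
    connect = nfs-server.example.com:2363

The client then mounts through its end of the tunnel, e.g. mount -t nfs4 -o proto=tcp,port=2323 127.0.0.1:/export /mnt. Because NFSv4 multiplexes everything over a single TCP port, one tunnel per server is enough.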

The performance penalty for tunneling NFS over stunnel is surprisingly small—transferring an Oracle Linux Installation ISO over an encrypted NFSv4.2 connection is well within 5% of the speed of clear text. Even more stunning is the performance of fuse-sshfs, which appears to beat even clear-text NFSv4.2 in transfer speed. NFS remains superior to sshfs in reliability, dynamic idmap and resilience, but FUSE and OpenSSH delivered far greater performance than expected.

Shared by JayM · 1 day ago · Atlanta, GA