The week in Windows 10: Clean installs on upgrades, new push for PC/Xbox integration
This week, Microsoft at last confirmed that the Windows 10 upgrades we’ll all be receiving in a few short weeks will come with an important feature: Once you upgrade to the new version of Windows, your authenticated copy of Windows 10 will provide you with a key that can be used to perform clean installs rather than an in-place upgrade. After this initial process, you won’t have to keep your old key, drop a CD in the drive, or otherwise mirror the original Windows installation.
This suggests Microsoft will effectively allow people to redeem previous Windows keys for new Windows 10 keys, which will then be tied to whatever version of the OS you’re qualified to own. There’s no word yet on what happens if, for example, you redeem a legitimate Windows 7 key for a new Windows 10 key and then upgrade the underlying hardware in the box. Gamers with legitimate copies of Windows 7 or 8/8.1 may want to make any hardware upgrades they’re planning before actually installing Windows 10 — it’s possible that upgrading the hardware post-install will lock the key to a particular platform and make it more difficult to swap components in the future.
When Microsoft launched its last software-as-a-service push with Office 365, it simultaneously introduced requirements that made reinstallation and upgrades more onerous than they had been previously. Specifically, it became (temporarily) against the terms of service to remove the software from one installed system and put it on another. The company later modified this wording after consumer pressure, but the terms and conditions surrounding Windows 10 and its upgrade/reinstallation policies still haven’t been clarified.
Phil Spencer promises unified Xbox One, Windows 10 gaming experience
Meanwhile, Microsoft has announced it will support PC Gamer’s efforts to host a PC gaming-themed event at E3. For years, PC gaming has been all but ignored — Valve’s Steam has hauled in hundreds of millions of dollars over the last ten years, while Microsoft’s efforts to build a credible Windows Store or its own Games for Windows Live platform have been wretched by comparison. Xbox head Phil Spencer has made it clear he wants to change this and has outlined a plan to offer Xbox as a multi-platform gaming brand extending across both PCs and the Xbox One itself. Details on the plan are still vague, though Microsoft has promised that Windows 10 users who own Xboxes and compatible networking hardware will be able to stream games from Xbox to PC across local networks.
While Spencer told PC Gamer that PC gaming is now much more important to Microsoft than it was in the past, his explanation of what’s changed between the old Games for Windows days and the modern era could raise red flags for some:
“The key difference now is that the Xbox team is driving the Windows and console gaming efforts as one connected ecosystem,” Spencer said. “Games and gaming is front and center in our device and service strategy at Microsoft. I can tell you definitively that our team has never committed more resources to making Windows better for game developers and gamers, and that means any gamer on Windows 10, regardless of storefront or device.”
While Xbox One integration and cross-platform play could be great for both console and PC gaming, hearing that Xbox is driving the PC gaming bus isn’t necessarily good news. In previous generations, the gaps between consoles and PCs were firmly rooted in both hardware and software, the first Xbox notwithstanding. Today, the Xbox One is, for all intents and purposes, a PC — albeit one with some specialized internal components and a low-level OS variant based on Windows 8. There’s no reason why Microsoft can’t merge many aspects of gaming and content sharing across platforms to the betterment of both. Spencer, for example, leaves open the idea of bringing Xbox One exclusives over to Windows 10 and promoting both platforms simultaneously.
What’s going to be trickier, however, is offering PC gamers features that take advantage of the fact that a game is running on a PC. Many top-tier releases of the past few years that debuted on both platforms have UIs clearly designed for console gamers first. Games like Skyrim ship with low-information user interfaces that are murder to sort through or work with on a PC. Games designed to be played on a monitor at a distance of 2-4 feet follow different guidelines and rules than titles designed for televisions at a distance of 6-8 feet. That doesn’t make one “better” than the other — but PC gamers are going to be watching Microsoft to see whether the company actually puts real effort into improving gaming on both platforms, or whether it expects PC gamers to roll over and make nice with Xbox.
Microsoft saw the future, but missed creating it
This week, Slate highlighted a set of futuristic Microsoft concept videos from 1999 and 2000. In these videos, it’s clear Microsoft did have vision (and still does), and foresaw much of the evolution and innovation of the past 15 years. There’s a bit of irony in this appearing on Slate, which started out as an online magazine on Microsoft’s MSN online service in 1996 (for kicks, check out Slate founding editor Michael Kinsley’s 2006 retrospective).
At any rate, watch the videos and you’ll see rich online collaboration, smartphones, tablets, location-aware services, voice-controlled devices, personalized cloud-based content on multiple devices, and more. At the dawn of the millennium, Microsoft was one of the richest companies on the planet, sitting on over $20 billion in cash and growing revenue at a 25% annual clip. Despite its well-documented foibles in many areas, today’s Microsoft continues to be cash-rich and extremely profitable. So where did it go wrong?
Much has been written about Microsoft’s missteps of the past 15 years in particular. A large part of the blame has been directed at Steve Ballmer’s leadership of the company from 2000 to 2014, and at the company’s historically competitive culture. In most cases, the CEO of a company gets a disproportionate share of both the credit and the blame for its performance. Of course, credit or blame has to go somewhere, and the person at the top is the lightning rod.
But reality is usually more complicated than that. Most companies that once dominated their core markets, like IBM and Microsoft, also tend to be criticized for not being innovators. People say they’re just followers, adapters of others’ innovations, and better marketers. As companies build huge businesses, they tend to take fewer risks with them. While there is some truth to the innovation criticism, the real story is more nuanced.
The history of computing shows that one company rarely gets to dominate the next great technology shift. IBM dominated mainframes, successfully weathered the minicomputer wave, and created the PC architecture and market that opened the door for Intel and Microsoft. But IBM didn’t dominate these other businesses in the same way as mainframes. Microsoft dominated the market for PC operating systems, extended that dominance into PC applications, and successfully weathered the initial shift of computing to the Internet. But it failed to extend that dominance to Web services, mobile devices, cloud computing, or even gaming — despite investing tens of billions in those areas in the past two decades.
Micron announces new, 16nm TLC NAND
Micron has announced that it intends to ship a new triple-level cell (TLC) NAND type for consumer devices, including SSDs. These new TLC devices will use the company’s cutting-edge 16nm NAND and, as the name implies, store three bits of information per cell, up from two bits in standard MLC. The new TLC NAND will ship in 16GB chips like previous MLC designs, but the dies are 28% smaller than the MLC variants.
Micron has previously produced TLC NAND at higher process nodes, but the company hasn’t shipped it in consumer SSDs to date.
The uncertain history of TLC
In theory, TLC drives offer the best of both worlds — improved storage densities for consumers, and lower prices for OEMs, thanks to higher storage capacities and lower chip costs. In practice, things haven’t turned out that way. The more bits of data you store per cell of NAND, the more charge levels you have to be able to distinguish. An SLC cell stores one bit and has two “values” — 0 or 1. An MLC cell, which stores two bits, has four values — we can represent them as 00, 01, 10, and 11. A TLC NAND cell, with three bits, has eight total values that must be stored and read back reliably.
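To make that scaling problem concrete, here is a minimal Python sketch showing how the number of distinct charge levels grows with bits per cell, and how the margin between adjacent levels shrinks as a result. The total voltage window used below is an illustrative assumption, not a figure from Micron.

```python
# Illustrative only: how many charge levels each NAND cell type must
# distinguish, and how the spacing between adjacent levels shrinks.
# TOTAL_WINDOW_VOLTS is an assumed round number, not vendor data.

TOTAL_WINDOW_VOLTS = 6.0  # hypothetical usable threshold-voltage window

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    levels = 2 ** bits                          # SLC: 2, MLC: 4, TLC: 8
    margin = TOTAL_WINDOW_VOLTS / (levels - 1)  # spacing between adjacent levels
    print(f"{name}: {bits} bit(s)/cell -> {levels} levels, "
          f"~{margin:.2f} V between adjacent levels")
```

The narrower those margins become, the harder it is to read a cell correctly as it wears, which is why TLC has historically traded endurance and performance for density.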
NSA uses warrantless Web surveillance to hunt hackers
Ever since leaked documents began detailing the surveillance programs run by the NSA and other government agencies, there’s been an ongoing debate over the nature and limits that should be placed on such surveillance. One of the most troubling trends exposed in repeated leaks is the degree to which the government has exceeded the enormous authority granted it by the Patriot Act and other legislation. New information, published today, is going to reignite that argument. Days after the Senate voted to reauthorize key Patriot Act provisions with some modest modifications, details have leaked on how the Obama Administration authorized the NSA to search the Internet for evidence of malicious hacking, even when there were no obvious ties between the alleged hackers and any international groups.
According to a joint investigation by the New York Times and ProPublica, the Justice Department authorized the NSA to hunt for hackers without a warrant, even when those hackers were present on American soil. Initially, the DOJ authorized the NSA to gather only addresses and “cybersignatures” that corresponded to computer intrusions, so that it could tie the efforts to specific foreign governments. The NSA, however, sought permission to push that envelope further. The newly published slides also note, incidentally, that Dropbox was targeted for addition to the PRISM program.
The US army wants uniforms that make soldiers invisible
The US army wants to develop uniforms that will make its troops "invisible" on the battlefield within the next 18 months.
The military wants to test the latest wearable camouflage technologies, and has put out a call for proposals from companies manufacturing "metamaterials". These highly adaptive structures, first demonstrated in 2006, can bend light around the wearer, essentially making them "invisible" from certain angles.
Since then, the technology has been held back by technical constraints, as many of these materials have only worked successfully in a lab, under specific conditions and in certain parts of the spectrum.
But now the military is actively looking to make the technology work -- if not hiding its troops completely, then at least making them "shadowless" in certain wavelengths -- and is turning to lab-created materials for the solution. According to the army's specifications, it's looking to create invisibility uniforms that work in all temperatures and weather conditions and don't require a power supply -- and can still be integrated with the rest of a soldier's equipment.
However, it's unlikely that a truly invisible army will be put into action anytime soon. Speaking to the New Scientist, Martin Wegener from Germany's Karlsruhe Institute of Technology said: "Complete invisibility of macroscopic objects for all visible colours is fundamentally impossible." His team has created "invisibility cloaks" from photonic crystals -- but they only work on certain wavelengths.
Why we should not fear AI. Yet
I once heard artificial intelligence described as making computers that behave like the ones in the movies. Hollywood's computers and robots are invariably mad, bad and dangerous to know. So when we are warned about the emerging threat of AI to the future of humanity, people get alarmed.
There are plenty of things to be alarmed about in the modern world, from climate change to terrorism, Ebola outbreaks to nuclear reactors melting down. However, I am pretty relaxed about the threat from AI and actually very excited about what future developments hold.
AI isn't poised to decide that we are superfluous to requirements and that the planet would be better off without us. To believe that is to misunderstand the progress we have made in AI and to forget a basic feature of human design. The rise and rise of computing power has brought huge advances in our ability to solve problems that previously seemed beyond machines. Examples include speech recognition, language translation, defeating human chess champions, systems taking part in quiz shows, and vehicle navigation and control. The list goes on. However, the characteristic of this success is what I call the combination of brute force and a little insight.
When Garry Kasparov was beaten by an AI chess-playing programme, it happened because of the machine's ability to review hundreds of thousands of moves deep into the game -- brute force. The program had a large library of stored moves from openings through to end games -- brute force. The program was able to apply weightings to the relative goodness of the moves -- a little insight. The result was a defeat for Kasparov.
A couple of decades earlier we had been told that if ever a computer beat a human world chess champion, the age of intelligent machines would have arrived. One did, and it hadn't. And it hasn't arrived today, when DeepMind's latest learning algorithms allow computers to teach themselves to play arcade games at superhuman levels.
What we do have are programs and machines exquisitely designed for particular tasks -- achieving a level of performance through brute force and insight that makes them look smart. And that brings us to that basic feature of human design. We have evolved such that we are disposed to see intelligence, other minds and agency in the world around us. A three-year-old child tows a piece of paper around and treats it as their pet. MIT students sat entranced by a crude humanoid robot that could raise its eyebrows, effect a frown or look sad, all through a simple set of rules that triggered responses based on the students' behaviour. Or think back to Kasparov, who was convinced that the programme that beat him at chess was reading his mind.
The reality is no less exciting. What we are witnessing is the emergence of Ambient Intelligence and Augmented Intelligence. These other senses of the abbreviation AI have come about by encapsulating and compiling narrow components of artificial intelligence -- programs crafted for a particular task and niche -- into devices, the Internet of Things and the Web.
Augmented Intelligence occurs when we use our global infrastructure of data and machine processing to connect thousands, even millions, of humans. Witness the recent response to the Nepal earthquake. This is a genuinely exciting prospect, in which we solve problems together that are beyond any one individual or organisation.
In the meantime there is no doubt we should take the ethics of AI very seriously. There is danger in complex systems with AI algorithms at their heart. The danger lies in not having humans in the loop to monitor and, if necessary, override the system's behaviour, and in not having clear rules of engagement built into these systems: principles that embody the highest levels of respect for human life and safety.
Isaac Asimov, who saw the future better than most, gave us a first law of robotics -- a robot may not injure a human being or, through inaction, allow a human being to come to harm. This is a great principle to compile into any complex system.
Robot truck hits public roads for first time
Self-driving freight trucks have been given the go-ahead to drive on Nevada's roads.
The vehicles, developed by German manufacturer Daimler AG, have clocked up 16,000 kilometres in order to get the key to the western state's roads -- and a full licence. "This is not a testing licence," said chief executive Wolfgang Bernhard at a press conference on 5 May. "We believe that these vehicles and systems are ready." Daimler then proved its point by driving its new Freightliner Inspiration vehicles across the Hoover Dam. Two of the vehicles -- which use a long-range radar, short-range radar and a stereo camera to operate -- have been given a licence to operate on Nevada's public roads.
By comparison, Google X's own self-driving cars only took to public roads for testing earlier this year, with a spokesperson revealing in March that members of the public may be accepted as passengers in 2015.
While car accidents make up the vast majority of road traffic accidents in the UK, with 785 car passenger fatalities in 2013, lorries are increasingly hitting the headlines for the dangers they pose to cyclists and pedestrians in urban areas. A 2013 study commissioned by the Campaign for Better Transport even found the vehicles likely had a hand in the rising number of fatal traffic accidents on motorways and A-roads: 52 percent of fatal accidents on motorways involved HGVs, it said, despite them making up just 10 percent of the traffic; and one in five fatal crashes on A-roads involved the vehicles. The report also estimated that an HGV is five times as likely to be involved in a fatal accident on a minor road. According to a report by the International Transport Forum released this year, global freight traffic will quadruple by 2050, so it's a good time to start investing in autonomous trucks.
Why Bitcoin Could Be Much More Than a Currency
Bitcoin doesn’t have to replace government-backed money to improve the way we do business online.
Boosters of Bitcoin commonly call the digital currency the future of money. But even if it doesn’t turn out to be, a growing group of investors and entrepreneurs is convinced that the idea at the center of Bitcoin could revolutionize industries that rely on digital record keeping. It might replace conventional methods of keeping track of valuable information like contracts, intellectual-property rights, and even online voting results.
Bitcoin’s real promise, they say, is not the currency. It’s the underlying technology, in which thousands of computers in a distributed network use cryptographic techniques to create a permanent, public record of every single Bitcoin transaction that has ever occurred (see “What Bitcoin Is and Why It Matters”). Investors are betting that this record-keeping system, called the blockchain, will be valuable for many other things besides tracking payments.
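As a rough illustration of why such a record is hard to alter after the fact, here is a minimal hash-chain sketch in Python. It is not Bitcoin's actual data structure or consensus mechanism, just the core idea that each entry cryptographically commits to everything that came before it; the transaction strings below are made up.

```python
import hashlib
import json

def block_hash(prev_hash: str, transactions: list) -> str:
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a toy chain of three blocks.
chain = []
prev = "0" * 64  # placeholder for a genesis block
for txs in (["alice->bob:1.0"], ["bob->carol:0.5"], ["carol->dave:0.2"]):
    h = block_hash(prev, txs)
    chain.append({"prev": prev, "txs": txs, "hash": h})
    prev = h

# Tampering with an early block is detectable, because the recomputed hash
# no longer matches the one the rest of the chain was built on.
chain[0]["txs"][0] = "alice->bob:100.0"
print(block_hash(chain[0]["prev"], chain[0]["txs"]) == chain[0]["hash"])  # False
```

In the real network, thousands of machines independently maintain and extend copies of this record, which is what makes the ledger both public and effectively permanent.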
It’s become common for enthusiasts to compare where Bitcoin is now to where the Internet was in the 1980s and early 1990s. Joel Monegro, a venture capitalist at Union Square Ventures, says the open-source technology that creates the blockchain can be fairly compared to the open-source protocol that is the basis for the Internet, called TCP/IP. Technically a pair of protocols, the Transmission Control Protocol and Internet Protocol, TCP/IP dictates the specific ways data is packaged and routed between computers in a network. For years after TCP/IP was invented, the technology was accessible only to people with a certain level of technical knowledge. Similarly, right now Bitcoin is too arcane for most people and challenging to use even for those who are familiar with it.
Eventually, additional protocols were built that worked with TCP/IP and paved the way for e-mail applications, Web browsers, file transfer services, and so on. Something similar could be brewing now with Bitcoin. The process for making payments with the currency can be thought of as just the first application built on the blockchain system, says Brian Forde, the director of digital currency at MIT’s Media Lab. New protocols being built on top of the foundational technology could lead to easy-to-use consumer products and services.
What might those be? Anything that would benefit from having information stored in an unchangeable database that is not owned or controlled by any single entity and yet is accessible from anywhere at any time.
One early example of such a “Blockchain 2.0” service is Counterparty, which promises its users they can “engage in advanced financial contracts without having to trust anyone else to hold your funds or do your accounting.” In addition to providing a platform for exchanging bitcoins, Counterparty stores important accounting or contractual information in the blockchain.
Another early-stage company taking this general approach is Factom, which is aiming to appeal to a wide range of businesses. Factom says it can “maintain a permanent, time-stamped record” of a company’s data in the blockchain, which will “reduce the cost and complexity of audit trails, managing records, and complying with government regulations.” To test the concept, it has teamed with a medical records company to build a tool health-care providers could use to store data that would be valuable during billing disputes or audits.
The reason the blockchain has the potential to be an all-purpose global database is that every Bitcoin transaction is verified cryptographically by thousands of computers running the software, and each transaction can carry a small amount of additional information. Right now that is roughly 40 characters, but there are ways around that limit. A service can build a separate network that stores more information and then use cryptography to encode that information as a string of numbers and letters small enough to be written into the blockchain. Counterparty and Factom both employ this basic principle.
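A minimal sketch of that encoding step, assuming nothing about Counterparty's or Factom's actual implementations: keep the full record off-chain and write only a fixed-length cryptographic digest of it into a transaction's small data field.

```python
import hashlib

def anchor_record(record: bytes) -> bytes:
    """Return a fixed-length fingerprint of an arbitrarily large record.

    The full record stays in an off-chain store; only this 32-byte digest
    would be embedded in a transaction's limited data field. Anyone holding
    the original record can recompute the digest later and confirm it
    matches what was written into the blockchain.
    """
    return hashlib.sha256(record).digest()

record = b"full contract text, audit trail, or billing history ..."
digest = anchor_record(record)
print(len(digest))   # always 32 bytes, however large the record is
print(digest.hex())  # the same value as a short string of letters and digits
```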
One twist, though, is that bitcoins themselves are still inherent to the process: they provide the incentive for people to help make all this happen. Verifying transactions and storing their data in the blockchain earns “miners” newly minted bitcoins. In other words, any service that aims to use the blockchain as a general-purpose database will have to pass a bitcoin (or a fraction of one) around in the process. Or it will have to find some other way to motivate miners to put the information into the ledger.
The concept of a blockchain is not limited to Bitcoin, and several other networks have recently emerged as potential alternatives. Indeed, Bitcoin’s blockchain isn’t even necessarily the one that is best equipped to have applications built on top of it. But Bitcoin has gained by far the most traction and has the biggest network, which makes it more resilient than the others, says Monegro.
If it is to become a mainstream technology, Bitcoin’s software will probably need to be adjusted so that it can handle the huge transaction volumes that would be involved (see “The Man Who Really Built Bitcoin”). There are also open questions about how well Bitcoin’s design will hold up over the long term (see “Academics Spy Weaknesses in Bitcoin’s Foundations” and “Price Slump Tests Bitcoin’s Self-Correcting Economics”). Nonetheless, if the future of blockchain technology is not necessarily contingent on the future of Bitcoin, the powerful idea at the currency’s core is likely to be here to stay.
Air Traffic Control for Drones
If large numbers of commercial drones are to take to the skies, they’ll need an air traffic control system.
Drones at the San Francisco headquarters of Airware. The company will soon begin flying some of them on NASA bases in California as part of a project developing an air traffic control system for drones.
How do you keep small drone aircraft safe in the world’s busiest national airspace? One idea is to have them use cellphone networks to feed data back to an air traffic control system made just for drones.
A startup called Airware is working with NASA on a project exploring how to manage the swarms of commercial drones expected to start appearing in U.S. skies. The four-year program will create a series of prototype air traffic management systems and could shape how widely commercial drones can be used. Airware’s main business is selling control software and hardware to drone manufacturers and operators.
The U.S. Federal Aviation Administration has yet to propose rules to govern the use of commercial robotic aircraft in U.S. skies. But it predicts that 7,500 unmanned craft weighing 55 pounds (25 kilograms) or less will be operating in the U.S. by 2018. There is strong interest from agriculture, mining, and infrastructure companies in using drones for tasks like inspecting crops or gathering geospatial data (see “10 Breakthrough Technologies 2014: Agricultural Drones”).
That could mean gridlock in the skies, or at least increasingly unsafe traffic patterns. “You will have competing interests trying to use the same space,” says Jesse Kallman, head of business development and regulatory affairs at Airware. “Imagine Amazon trying to deliver packages in an area that an energy company is trying to survey their power lines.”
The first prototype to be developed under NASA’s project will be an Internet-based system. Drone operators will file flight plans for approval. The system will use what it knows about other drone flights, weather forecasts, and physical obstacles such as radio masts to give the go-ahead.
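NASA and Airware haven’t published an interface for this system, but a toy version of the approval logic might look like the sketch below: a requested flight gets the go-ahead only if its operating area and time window don’t conflict with flights that have already been approved. Weather and obstacle checks are omitted for brevity, and all names and data structures here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FlightPlan:
    """Hypothetical flight plan: a circular operating area plus a time window."""
    lat: float
    lon: float
    radius_km: float
    start: float  # hours, e.g. 13.0 for 1 p.m.
    end: float

def _areas_overlap(a: FlightPlan, b: FlightPlan) -> bool:
    # Rough flat-earth distance check, adequate for small operating areas.
    km_per_deg = 111.0
    dx = (a.lon - b.lon) * km_per_deg
    dy = (a.lat - b.lat) * km_per_deg
    return (dx * dx + dy * dy) ** 0.5 < (a.radius_km + b.radius_km)

def _times_overlap(a: FlightPlan, b: FlightPlan) -> bool:
    return a.start < b.end and b.start < a.end

def approve(request: FlightPlan, approved: list) -> bool:
    """Give the go-ahead only if the request conflicts with no approved flight."""
    return not any(
        _areas_overlap(request, other) and _times_overlap(request, other)
        for other in approved
    )

# Example: a survey flight is rejected because a delivery run already holds the area.
delivery = FlightPlan(37.77, -122.42, radius_km=2, start=13.0, end=14.0)
survey = FlightPlan(37.78, -122.41, radius_km=3, start=13.5, end=15.0)
print(approve(survey, approved=[delivery]))  # False
```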
Later phases of the project will build more sophisticated systems that can actively manage drone traffic by sending out commands to drones in flight. That could mean directing them to spread out when craft from multiple operators are flying in the same area, or taking action when something goes wrong, such as a drone losing contact with its operator, says Jonathan Downey, CEO of Airware.
If a drone strayed out of its approved area, for example, the system might automatically send a command that made it return to its assigned area, or land immediately. The commands could vary depending on the situation—such as how close the drone is to a populated area—or the size and weight of the aircraft, says Downey. Ultimately, NASA wants its system to do things like automatically steer drones out of the way of a crewed helicopter that unexpectedly passes through.
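The enforcement side could reduce to something like the following hedged sketch, with hypothetical command names and thresholds rather than Airware’s or NASA’s actual design: the command chosen depends on how far outside its approved area the drone has strayed and whether it is near a populated area.

```python
def enforcement_command(distance_outside_km: float,
                        near_populated_area: bool,
                        max_overshoot_km: float = 0.5) -> str:
    """Choose a corrective command for a drone that has left its approved area.

    Hypothetical policy: small excursions trigger a return-to-area command,
    but straying near people, or far outside the boundary, forces a landing.
    """
    if distance_outside_km <= 0:
        return "CONTINUE"  # still inside the approved area
    if near_populated_area or distance_outside_km > max_overshoot_km:
        return "LAND_IMMEDIATELY"
    return "RETURN_TO_AREA"

print(enforcement_command(0.2, near_populated_area=False))  # RETURN_TO_AREA
print(enforcement_command(0.2, near_populated_area=True))   # LAND_IMMEDIATELY
```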
Getting that to work will require a reliable way for drones to communicate with the traffic system. Airware believes that equipping drones with cellular data connections could be the best option. The equipment that conventional aircraft use to communicate or send digital data to air traffic control systems is too bulky for use on drones.
Airware is set to perform a series of flight and lab tests on different drone craft, ranging from quadcopters to helicopters to fixed wing planes, on a NASA base in California, perhaps as soon as this year. The first stage of testing is aimed at understanding how different craft could respond to commands from a traffic control system.
Ella Atkins, an associate professor of aerospace engineering at the University of Michigan, says that so-called general aviation—unscheduled private flights—poses the most difficulty for integrating drone traffic into U.S. airspace. “The most challenging thing would be to combine a large fleet of Amazon Prime drones carrying packages and the Piper Cubs that just want to punch a hole in the sky on the weekend,” she says.
Atkins says that is as much a regulatory issue as a technological one, and suggests it may be time to reconsider FAA rules written when only crewed craft took to the skies. Giving drones relatively free rein below an altitude of a few hundred feet, except in the vicinity of airports, would mostly remove conflict between drones and general aviation, she suggests.
Such major changes to FAA rules appear unlikely. People in the nascent commercial drone industry often point out that the U.S. regulator has been slower than its counterparts in other countries to clear the way for commercial drone flights, even just for research. Airware already has customers using its control systems on drones flying over mining operations in France, and inspecting oil rigs in Australia, for example.
However, those countries have not so far begun work on drone traffic control systems. “I’m not familiar with any other system,” says Downey. “This is an area the U.S. has an opportunity to take the lead on.”