Wednesday 25 March 2015

How USB charging works, or how to avoid blowing up your smartphone

The tech world has finally coalesced around a charging standard, after years of proprietary adapters and ugly wall wart power supplies. Well, sort of: We’re already seeing some fragmentation in the form of the new USB-C connector, which could eventually replace today’s micro USB ports, as well as what is thankfully turning out to be a short-lived obsession Samsung had with larger USB Micro-B connectors for its Galaxy line. But aside from that, and with the obvious exception of Apple’s Lightning connector, micro USB has destroyed the industry’s penchant for custom ports.

Ten years ago, you always had to make sure you had the correct power supply for each of your gadgets, and usually that power supply wasn’t even labeled. Today, you can charge your phone at your friend’s house, plug your Kindle into any computer, and download photos from a digital camera directly to your TV, all thanks to a standardized connector. In its place, though, there’s a new problem: USB power. Not all USB chargers, connectors, and cables are born equal. You’ve probably noticed that some wall chargers deliver more current than others. Sometimes, one USB socket on a laptop is seemingly more powerful than the other. On some desktop PCs, you can charge your smartphone via a USB socket even when the machine is turned off. It turns out there’s a method to all this madness — but first we have to explain how USB power actually works.

New specifications

There are now four USB specifications — USB 1.0, 2.0, 3.0, and 3.1 — in addition to the new USB-C connector. We’ll point out where they significantly differ, but for the most part, we’ll focus on USB 3.0, as it’s the most common. The other important fact is that in any USB connection, there is one host and one device. In almost every case, your PC is the host, and your smartphone, tablet, or camera is the device. Power always flows from the host to the device, but data can flow in both directions.
Okay, now the numbers. A USB socket has four pins, and a USB cable has four wires. The inside pins carry data (D+ and D-), and the outside pins provide a 5-volt power supply (VBUS and ground). In terms of actual current (milliamps or mA), the current specs dictate three kinds of USB port: a standard downstream port, a charging downstream port, and a dedicated charging port. The first two can be found on your computer (and should be labeled as such), and the third kind applies to “dumb” wall chargers.
In the USB 1.0 and 2.0 specs, a standard downstream port is capable of delivering up to 500mA (0.5A); in USB 3.0, that moves up to 900mA (0.9A). The charging downstream and dedicated charging ports provide up to 1,500mA (1.5A). USB 3.1 bumps throughput to 10Gbps in what is called SuperSpeed+ mode, making it roughly equivalent to first-generation Thunderbolt. It also supports power draws of 1.5A and 3A over the 5V bus.
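To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The port current limits come straight from the specs above; the 2,100mAh battery and the rough 80% efficiency factor are illustrative assumptions (real charging tapers off as the battery fills), so treat the output as ballpark numbers only.

    # Ballpark charge-time estimates for the USB port types described above.
    # Assumptions: a hypothetical 2,100mAh battery and ~80% efficiency.
    PORT_LIMITS_MA = {
        "USB 1.0/2.0 standard downstream": 500,
        "USB 3.0 standard downstream": 900,
        "Charging / dedicated charging port": 1500,
        "USB 3.1 high-power profile": 3000,
    }

    BATTERY_MAH = 2100   # illustrative smartphone battery
    EFFICIENCY = 0.8     # crude loss factor; real charging is non-linear

    for port, limit_ma in PORT_LIMITS_MA.items():
        watts = 5.0 * limit_ma / 1000.0          # everything here is a 5V bus
        hours = BATTERY_MAH / (limit_ma * EFFICIENCY)
        print(f"{port}: {watts:.1f}W max, roughly {hours:.1f}h to charge")

Running this shows why the difference matters: the same phone that needs more than five hours on an old 500mA port would fill in under an hour and a half at 1.5A.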
USB-C is a different connector entirely. First, it’s reversible: you can plug it in either way up and it will work, unlike with older USB connectors. It’s also capable of twice the theoretical throughput of USB 3.0, and can output more power. Apple is pairing USB-C with USB 3.1 on its new MacBook, and so is Google with the new Chromebook Pixel. Note, though, that older-style USB ports can also support the 3.1 standard.
The USB spec also allows for a “sleep-and-charge” port, which is where the USB ports on a powered-down computer remain active. You may have noticed this on your desktop PC, where there’s always some power flowing through the motherboard, but some laptops are also capable of sleep-and-charge.
Now, this is what the spec dictates. In practice, though, there are plenty of USB chargers that break these specs — mostly of the wall-wart variety. Apple’s iPad charger, for example, provides 2.1A at 5V; Amazon’s Kindle Fire charger outputs 1.8A; and car chargers can output anything from 1A to 2.1A.

Can I blow up my USB device?

There is a huge variance, then, between normal USB ports rated at 500mA and dedicated charging ports that range all the way up to 3,000mA. This leads to a rather important question: If you take a smartphone that came with a 900mA wall charger and plug it into a 2,100mA iPad charger, for example, will it blow up?
In short, no: You can plug any USB device into any USB cable and into any USB port, and nothing will blow up — and in fact, using a more powerful charger should speed up battery charging.
The longer answer is that the age of your device plays an important role, dictating both how fast it can be charged, and whether it can be charged using a wall charger at all. Way back in 2007, the USB Implementers Forum released the Battery Charging Specification, which standardized faster ways of charging USB devices, either by pumping more amps through your PC’s USB ports, or by using a wall charger. Shortly thereafter, USB devices that implemented this spec started to arrive.
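The detection handshake the spec describes can be sketched in a few lines. Roughly speaking, a device probes the D+/D- data lines: a dedicated (“dumb”) charger shorts them together, while a charging downstream port responds to the probe without the short. The Python below is a simplified illustration of that classification logic, not real driver code; the boolean inputs stand in for the analog comparator measurements that actual charger-detection silicon performs.

    # Simplified sketch of Battery Charging Specification port detection.
    # Primary detection: drive a small voltage onto D+ and watch D-.
    # Secondary detection: drive D- and watch D+. A dedicated charger
    # shorts D+ to D-, so the probe "echoes" back; a PC charging port
    # responds to primary detection but shows no short.

    def classify_port(dminus_follows_dplus: bool,
                      dplus_follows_dminus: bool) -> str:
        if not dminus_follows_dplus:
            # No response at all: an ordinary data port (500mA or 900mA).
            return "standard downstream port"
        if dplus_follows_dminus:
            # Data lines shorted together: a wall-wart charger.
            return "dedicated charging port (up to 1.5A)"
        return "charging downstream port (up to 1.5A)"

    print(classify_port(False, False))  # plain PC port
    print(classify_port(True, True))    # "dumb" wall charger
    print(classify_port(True, False))   # high-amperage PC port

A device that predates this handshake never asks the question, which is exactly why older gadgets stay stuck at 500mA no matter what you plug them into.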
If you have a modern USB device — really, almost any smartphone, tablet, or camera — you should be able to plug into a high-amperage USB port and enjoy faster charging. If you have an older device, however, it probably won’t work with USB ports that employ the Battery Charging Specification; it might only work with old-school, original (500mA) USB 1.0 and 2.0 PC ports. In some (much older) cases, USB devices can only be charged by computers with specific drivers installed.
There are a few other things to be aware of. While PCs can have two kinds of USB port — standard downstream or charging downstream — OEMs rarely seem to label them as such. As a result, you might have a device that charges from one port on your laptop but not from the other. This is mostly a trait of older computers, as there is little reason to fit standard downstream ports when high-amperage charging ports are available. In a similar vein, some external devices — hard drives and optical drives, most notably — require more power than a single USB port can provide, which is why they include a two-USB-port Y-cable or an external AC power adapter.
Otherwise, USB has certainly made charging our gadgets and peripherals much easier than it used to be. And if the new USB-C connector catches on — and it looks like it will — things will get even simpler, because you’ll never again have to curse after trying to plug a cable in the wrong way up.

Friday 20 March 2015

Nintendo’s new plan for mobile — and what it means for the company’s consoles

Yesterday, Nintendo dropped a pair of bombshells on the gaming world. First, the company announced that it had partnered with the Japanese mobile game developer DeNA (pronounced “DNA”) and would bring its major franchises — all of them — to mobile gaming. Second, it has begun work on a next-generation console, codenamed the Nintendo “NX.”

Both of these announcements are huge shifts for the Japanese company, even if it took pains to emphasize that Nintendo remains committed to its first-party franchises and its own game development efforts.
Nintendo’s partnership with DeNA
Partnering with an established company like DeNA theoretically gets Nintendo the best of both worlds. Nintendo has only barely dipped its toes into free-to-play content, while DeNA has shipped a number of games using that formula; Nintendo has no experience developing franchised titles for smartphones or tablets, whereas DeNA has plenty. But partnering with a third party gives Nintendo another potential advantage — it’ll let the company effectively field-test new gaming concepts and paradigms on hardware that’s at least as powerful as its own shipping systems.

Revisiting the “console quality” graphics question

One of the more annoying trends in mobile gaming over the last few years has been the tendency of hardware companies to trumpet “console quality” graphics as a selling point of mobile hardware. Multiple manufacturers have done this, but head-to-head match-ups tend to shed harsh light on those promises.
When it comes to Nintendo’s various consoles, however, mobile chips are on much firmer ground. First off, there’s the Nintendo 3DS. Even the “New” 3DS is built around quad-core ARM11 CPUs clocked at 268MHz, with 256MB of FCRAM, 6MB of VRAM plus 4MB of additional memory within the SoC, and an embedded GPU from 2005, the PICA200. Any modern smartphone SoC can absolutely slaughter that feature set, both in terms of raw GPU horsepower and supported APIs.
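To make the gap concrete, here’s the comparison laid out as Python data. The 3DS column uses the figures above; the smartphone column describes a representative 2015 flagship SoC (the Snapdragon 810 covered elsewhere in these pages), with approximate, publicly quoted numbers rather than anything measured here.

    # Spec gap between the "New" 3DS and a representative 2015 flagship SoC.
    # 3DS figures are from the text above; the phone figures are approximate.
    SPECS = {
        "CPU": ("4x ARM11 @ 268MHz", "4x Cortex-A57 + 4x Cortex-A53 @ ~2GHz"),
        "RAM": ("256MB FCRAM",       "2-3GB LPDDR4"),
        "GPU": ("PICA200 (2005-era)", "Adreno 430"),
    }
    print(f"{'':5} {'New 3DS':22} {'2015 flagship SoC'}")
    for part, (handheld, phone) in SPECS.items():
        print(f"{part:5} {handheld:22} {phone}")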
What about the Wii U? Here, things get a little trickier. The Wii U is built on an older process node, but its hardware is a little stranger than we once thought. The CPU is a triple-core IBM 750CL with some modifications to the cache subsystem to improve SMP, and an unusual arrangement of 2MB of L2 on one core and 512KB on each of the other two. The GPU, meanwhile, has been identified as being derived from AMD’s HD 4000 family, but it’s not identical to anything AMD ever shipped on the PC side of the business.
The Wii U’s structure, with triple-core CPU
By next year, the 16nm and 14nm SoCs we’ll see should be more than capable of matching the Wii U’s CPU performance, at least in tablet or set-top form factors. If I had to bet on a GPU that could match the HD 4000-era core in the Wii U, I’d put money on Nvidia’s Tegra X1, with 256 GPU cores, 16 TMUs, and 16 ROPs, plus support for LPDDR4. It should match whatever the Wii U has, and by 2016, we should see more mobile GPUs capable of doing the same.
Nintendo isn’t going to want to trade off perceived quality for short-term profit. The company has always been extremely protective of its franchises — ensuring mobile devices (at least on the high end) are capable of maintaining Nintendo-level quality will be key to the overall effort. At the same time, adapting those franchises to tablets and smartphones gives Nintendo a hands-on look at what kinds of games people want to play, and the ways they use modern hardware to play them.

What impact will this have on Nintendo’s business?

Make no mistake: I think Nintendo wants to remain in the dedicated console business, and the “NX” next-generation console tease from yesterday supports that. Waiting several years to jump into the mobile market meant that mobile SoCs had that much more time to mature and improve, and offer something closer to the experience Nintendo prizes for its titles.
The question of whether Nintendo can balance these two equations, however, is very much open for discussion. Compared with the Wii, the Wii U has been a disaster. As this chart from VGChartz shows, aligned by month, the Wii had sold 38 million more consoles at this point in its life than the Wii U has. The chasm between the Wii and Wii U is larger than the combined number of Xbox Ones and PS4s sold.
Image by VGChartz
Without more information, it’s difficult to predict what the Nintendo NX might look like. Nintendo could have a console ready to roll in 18-24 months, which would be well within the expected lifetimes of the PlayStation 4 and Xbox One — or, of course, it could double down on handheld gaming and build a successor to the 3DS. Either way, the next-generation system will be significantly more powerful than anything Nintendo is currently shipping.
Pushing into mobile now gives Nintendo a way to leverage hardware more powerful than its own, and some additional freedom to experiment with game design on touch-screen hardware. But it could also signal a sea change in development focus. If the F2P model takes off and begins generating most of the company’s revenue, it’ll undoubtedly change how its handheld and console games are built — and possibly in ways that the existing player base won’t like.
Balancing all of this is going to be difficult for the company — a batch of poorly designed F2P games might still generate short-term revenue, but could ruin Nintendo’s reputation as a careful guardian of its own franchises. Failing to exploit the mechanics of the F2P market, on the other hand, could rob the company of capital it needs to transition to its new console.

Tuesday 17 March 2015

Don’t edit the human germ line? Why not?


Victor Hugo once observed, “There is nothing more powerful than an idea whose time has come.” Not long ago, the world asked whether it could have read privileges to view its own genetic file of life. The answer, wrested from regulating bodies and crusty institutions by the expanding clientele of companies like 23andMe, was a resounding yes. Rapid advances in the ability to make edits under this file system have now forced the hand of researchers around the world into penning a call for a moratorium (a temporary ban) on germ-line gene editing. Once again, the world asks: if not now, then when?

The answer no longer comes exclusively from funding bodies picking winners and losers, or from journals holding sway over whatever knowledge they can sequester and trickle out as they see fit. The question is simply too rich. We need look no further than Nature to see that the tables have turned: The first reference in its widely read commentary on the issue is not to an article in another peer-reviewed journal, but to an article from the people, a piece in the popular science publication MIT Technology Review.
That article notes that while some countries have responded to the argument over who can do what to whose genome, and at which positions, with an indefinite ban, other countries will simply do. In fact, they already have: in monkeys, and in human embryos beset with genetic predispositions for ovarian or breast cancer. The gene editing techniques that can now be used to police our entire genome, potentially in any cell of the body, can also hit you right in the family jewels — the germ cells. The techniques have names like zinc finger nucleases and TALENs, but the one that has caused the biggest stir is called CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats).
The primary reason for much of the commotion is that CRISPR isn’t all that hard. Its RNA tag can target specific DNA sequences with decent accuracy, and its onboard protein nuclease can cut out the offending region and prepare the wound for the cell’s repair systems to act on. The main problems right now are that it doesn’t always do its job, it doesn’t always do it only where it’s supposed to, and it takes finite time to do it. If used not just in static cells, or germ cells in waiting, but in the rapidly dividing cells of, say, a developing embryo, then all bets are off. It can still work, but if it catches a dividing cell in the act, when its pants are down so to speak, there is much less predictability.
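To make that mechanism concrete: Cas9, the nuclease behind CRISPR, cuts only where its roughly 20-letter RNA guide matches the DNA and the next three letters form an “NGG” motif (the PAM). Here’s a minimal, illustrative Python sketch that scans a DNA string for candidate target sites; the sequence is invented for the example, and a real guide-design tool would also scan the reverse complement and score off-target risk.

    import re

    # Toy CRISPR/Cas9 target-site finder. Cas9 needs a ~20nt protospacer
    # immediately followed by an "NGG" PAM; this scans one strand only
    # and does no off-target scoring, unlike real guide-design tools.

    def find_target_sites(dna: str):
        """Yield (position, protospacer) for every 20-mer with an NGG PAM."""
        for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna.upper()):
            yield m.start(), m.group(1)

    toy_dna = "ATGCGT" + "ACGTTGCATGCCGTAAGCTA" + "TGG" + "CCATG"
    for pos, protospacer in find_target_sites(toy_dna):
        print(f"candidate guide target at position {pos}: {protospacer}")

The PAM requirement is what gives CRISPR its “decent accuracy” and also its limits: sites without a nearby NGG simply can’t be cut, and near-matches elsewhere in the genome are where the off-target trouble comes from.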
What is a bit curious, disturbing actually, is that amid all the fuss over editing a little part of one protein in the singular nuclear genome of a cell, places like Britain are in the process of bankrolling related, but much more reckless, procedures under the guise of fertility — namely, the mitochondrial transfer procedures that generate what is essentially a three-parent embryo. Against the normal background that is our potluck of genetic recombination, many people who stand to benefit from things like CRISPR are asking: what exactly is the problem?
While it is illegal in Britain to modify even a single base pair in human gametes (eggs or sperm), as could conceivably be done in the creation of an IVF embryo, you may now knock yourself out restocking your egg with whatever mitochondria you want. Never mind that empowering the egg in this way potentially introduces 16.5 kilobase pairs of new mitochondrial DNA (as compared with the roughly 3.4 giga base pairs of nuclear DNA), albeit with ample redundancy.
To better understand some of the issues involved in this kind of germ-line modification, I would suggest availing yourself of the two articles linked in the next sentence. They highlight some concerns with mitochondrial mutations, heteroplasmy (different brands of mitochondria in the same cell or organism), and potential pitfalls in the elective crafting of artisanal mitochondrial children. At the center of this issue is a new technique being made available by a company called OvaScience. Its ‘Augment’ procedure takes mitochondria not from a stranger’s egg, or even from somatic cells of the husband, but from supporting cells right next to the egg within the mother’s own faltering ovaries.
It remains to be seen whether the mitochondrial DNA from these cells is of sufficiently better quality than that in the neighboring eggs; in particular, whether these cells are privy to the selective genetic bottlenecks that the egg is subjected to in vetting its mitochondrial suitors, or whether that very bottleneck is the root of the problem. The founders of the company have made some intriguing discoveries regarding these cells, not least dispelling the myth that a woman is born with all the eggs she will ever have. In mentioning new work at OvaScience (and other places), what the Tech Review article, like many others, misses is that the ability to edit mitochondrial genomes as we would the nuclear genome is now coming into full view.
Instead of talking about ongoing work at places like OvaScience to do things analogous to CRISPR in stem cells — cells which could be turned into eggs (and might begin to skirt some issues that fall under the rubric of ‘germ cell law’) — we should probably be talking about editing single points in mitochondria, especially if we have already green-lighted replacing the entire mitochondrial complement all at once through transfer. One researcher now looking at these issues is Juan Carlos Izpisua Belmonte of the Salk Institute in California. He is evaluating gene-editing techniques to modify the mitochondria in unfertilized eggs that would later be used in IVF. If he is successful, we will soon have concerns even more immediate than CRISPR in germ cells.
At the heart of the issue is the fact that the proteins making up the respiratory chain that powers our cells are mosaics. In other words, as researcher Nick Lane would say, mitochondria are mosaics. They are built from two genomes, their own DNA and the nuclear DNA, which re-apportions proteins (many once upon a time their own) back to them. Getting this mix right is the premier issue in fertility and any subsequent development of the organism. When negative mutations occur in the subunits making up these respiratory proteins, something predictable happens: They no longer fit together as snugly, and the electrons that need to be transported through them have a more difficult time tunneling between the reaction centers attempting to squeeze out every last drop of energy.
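That intuition can be made semi-quantitative. Electron tunneling rates through protein fall off roughly exponentially with distance (empirically, about an order of magnitude for every 1.7 angstroms or so), so even a fractional-angstrom misfit between mosaic subunits exacts a real cost. Here is a back-of-the-envelope Python sketch; the decay constant is a typical literature value for protein media, not a measurement of any particular complex, and real rates also depend on driving force and reorganization energy.

    import math

    # Back-of-the-envelope: how electron tunneling rate decays as two
    # respiratory-chain subunits drift apart. beta ~1.4 per angstrom is
    # a typical value for tunneling through protein; this toy model
    # ignores driving force and reorganization energy entirely.
    BETA = 1.4  # 1/angstrom

    def relative_rate(extra_gap_angstrom: float) -> float:
        """Tunneling rate relative to the snugly docked geometry."""
        return math.exp(-BETA * extra_gap_angstrom)

    for gap in (0.0, 0.5, 1.0, 2.0):
        print(f"misfit of {gap:.1f} A -> rate falls to {relative_rate(gap):.1%}")

A misfit of just one angstrom, on this crude model, cuts the electron transfer rate to roughly a quarter of its docked value, which is why mismatched mosaic subunits are so costly.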
Mr. Lane passes down another quote to us in his forthcoming book ‘The Vital Question,’ which makes much of this discussion a whole lot clearer. It comes from the famous biochemist Albert Szent-Györgyi, and it is a fitting conclusion to our remarks here on tinkering with the file system of life: “Life is nothing but an electron looking for a place to rest.”

Saturday 14 March 2015

New nodes and upstart companies fuel a dynamic market

2015 is likely to be a particularly critical year on multiple fronts for multiple semiconductor companies. There’s a great deal of potential disruption and meaningful product improvement coming down the pipe. Let’s take this time to look at the big semiconductor trends we expect to see over the next 12 months.

The glorious mobile mess

The mobile market has changed a great deal in the past 12 months, and the shifts aren’t over. From inside the United States, it’s been largely business-as-usual in the smartphone and tablet business. Worldwide, it’s a different story. New players are rising to the forefront, established companies are snapping up market share, and the end result is a reshuffling of vendor share — and vendor profits.
Data courtesy of IDC
Samsung has lost more than a third of its market share, while Apple has given up almost 10% of its own. Newcomer Xiaomi has five times the market share it did in 2012, while Lenovo and LG have picked up substantially in the same period. Most of Samsung’s losses, however, have been to a broad swath of vendors in new markets, and this same trend is clear across the industry: The “Others” column grew substantially in 2014. Samsung has responded by trimming its smartphone lineup by up to 30%.
Intel grew its share of the tablet market significantly, and may have even taken a larger share of the revenue pie from Qualcomm in the back half of the year — though this ignores the fact that Intel is effectively selling its tablet chips at a loss. Overall growth in tablets has slowed substantially, however, thanks to the shift towards larger “phablet” devices and to a slower refresh cycle.
Qualcomm's 2015 roadmap
Even as the worldwide markets shift, new technology is coming online. Qualcomm will launch its first 20nm Snapdragon hardware in the next few weeks: the Snapdragon 810 (early reports claimed it was used in the Samsung Galaxy Note 4, but that’s since been ruled out by Samsung itself). The Snapdragon 810 combines a quad-core Cortex-A57 cluster and a quad-core Cortex-A53 cluster on 20nm technology with an associated 20nm modem capable of multiple advanced LTE features. Power consumption and performance should both improve, and the Adreno 430 GPU is expected to be significantly more powerful than the Snapdragon 805’s Adreno 420. We’ll also see the wide debut and availability of Android 5.0 “Lollipop” this year, and the associated performance benefits. Qualcomm is still working on a successor to Krait, based on its own custom architecture, but with an unknown launch date.
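For readers unfamiliar with how a big.LITTLE pairing like the 810’s A57/A53 clusters behaves, the Python sketch below captures the core scheduling idea in deliberately simplified form. Real schedulers live in the kernel and use continuous load tracking and power models; the fixed threshold here is a made-up illustration, not how any shipping kernel decides.

    # Toy illustration of big.LITTLE cluster selection, as in SoCs like
    # the Snapdragon 810 (4x Cortex-A57 "big" + 4x Cortex-A53 "LITTLE").
    # Real kernel schedulers track load over time, not a fixed cutoff.
    BIG_THRESHOLD = 0.6  # hypothetical utilization cutoff

    def pick_cluster(task_load: float) -> str:
        """Send heavy tasks to the fast A57s, light ones to the frugal A53s."""
        return "Cortex-A57 (big)" if task_load > BIG_THRESHOLD else "Cortex-A53 (LITTLE)"

    for name, load in [("UI idle tick", 0.05), ("page render", 0.75),
                       ("game physics", 0.95), ("background sync", 0.20)]:
        print(f"{name:15} -> {pick_cluster(load)}")

The payoff of this arrangement is exactly the power/performance trade the article describes: the hot, fast cores only light up when a workload actually needs them.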
Mobile phones and tablets should jump ahead technologically, but with companies like MediaTek, Rockchip, and Allwinner gunning for design wins of their own, we could see further competition and sharp price drops throughout the year. Intel’s decision to back a budget chip design at TSMC and to partner with a company like Rockchip may end up being a very smart move.
Later in the year we should see 14nm hardware as well, though that’s expected to mostly be a Samsung shift, and there are rumors swirling about whether or not the company is having yield problems. Apple is currently the only company widely expected to make the jump to 14nm in 2015, and it’s entirely possible that it has locked up the first production runs from both Samsung and, possibly, GlobalFoundries. TSMC is expected to follow Samsung at the 16nm node, but not until the tail end of the year.