Top-tier PC manufacturer Lenovo is pushing into the world of virtual reality with a spate of VR-ready desktops certified for Oculus Rift. Lenovo says its latest entry, the sub-$2,000 ThinkCentre M910t introduced in late June, is the world’s first Oculus-certified commercial desktop.
Will this be a new era for high-performance desktop PCs? The demand could be there. M&A consulting firm Digi-Capital predicts the virtual reality and augmented reality (VR/AR) market will grow from about $3.9 billion last year to $108 billion in 2021.
Assuming that’s right, now may be the perfect time to help your customers gain traction in VR. It’s a market with high margins and room to grow.
Oculus Rift, the VR headset designer now owned by Facebook, got a head start in the marketplace. As a result, their name carries a lot of weight when discussing VR tech. Maybe that’s why more PC manufacturers are now seeking official Oculus Rift certification for their VR-ready desktops.
So far, most of the names on the Oculus-ready list are gaming platforms, such as Alienware, Falcon Northwest and Cyberpower. But hardcore Deus Ex machines are not the only game in town.
Now Lenovo seems intent on garnering some VR street-cred, too. That could put the supplier ahead of its usual competitors, including Dell and HP, as VR takes hold in the minds of the masses.
Lenovo’s new ThinkCentre M910t joins three of the company’s other models on the Oculus list: the Y700, the Y900 and the futuristic-looking IdeaCentre Y720 Cube.
Lenovo says its ThinkCentre M910t is the first Oculus-certified commercial desktop.
Lenovo knows that VR is no longer just for gaming. As the technology gets more advanced, elements of both VR and AR will find their way into such standard business-computing tasks as multimedia presentations, product development, training and customer support.
Real price of going virtual
All that said, VR-ready PCs are still expensive, and the reason why is simple: To guarantee a good VR experience, you have to buy your way into superior performance.
There is simply no way around it: you must sustain a high frame rate. For the Oculus Rift, that means 90 frames per second (FPS). Fall short, and the user will end up with choppy visuals sputtering less than an inch from his or her eyes. The inevitable result will be headaches and nausea.
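To put that requirement in concrete terms, here is the per-frame render budget at two common refresh rates. This is simple arithmetic, not a benchmark:

```python
# Per-frame render budget: how long the GPU has to draw each frame.
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given rate."""
    return 1000.0 / fps

for fps in (60, 90):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.2f} ms per frame")
```

At 90 FPS, the whole scene must be rendered in roughly 11 milliseconds, and stereo VR renders it once per eye. That's why mid-range hardware doesn't cut it.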
To achieve the desired result, a VR-ready PC should include a high-end GPU like the Nvidia GeForce GTX 980 or AMD Radeon RX 480 (a dual-GPU system is preferred). And to avoid bottlenecks, you’ll need to support that GPU array with a powerful processor and fast RAM.
A solid VR rig requires fast storage, too. A modern SSD with plenty of space is the best way to go. What all that adds up to is a desktop box starting around the $2,000 mark and going up — way up! — from there.
It’s telling that the Lenovo ThinkCentre M910t is actually available online for $835. But Lenovo’s press release puts the price at $1,829. That’s based on the specs it needed to pass Oculus’ 368 hours of testing.
This is an important lesson. Don’t let your VR customers buy into an under-equipped PC. Professional-strength VR carries a professional-strength price tag. If a price sounds too good to be true, it usually is.
After years of development, Intel has managed to shrink the humble PC to fit in a pocket. The company calls it the Compute Card.
It’s a logical follow-on to Intel’s mini-PC platform, a small desktop computer called the NUC (short for Next Unit of Computing). Introduced in 2012, the NUC has steadily shrunk over the years; the latest version measures just 4 inches square.
The Compute Card’s form factor is even smaller: 95 x 55 x 5 mm (approximately 3.7 x 2.2 x 0.2 inches). As Intel says, that’s only slightly larger than a credit card.
But the Compute Card is not just small for small’s sake. Instead, it’s designed to offer modular, plug-and-play solutions for real-world applications that include POS (point of sale), mobile all-in-one (AIO) systems and smart whiteboards.
Intel envisions a world in which users can quickly and easily plug Compute Cards into compatible screens and docking systems (see photo below). This will let users quickly and efficiently update both internal and off-site systems, and all without costly, time-consuming hardware updates.
Picture, for instance, a franchise such as Dunkin’ Donuts. With hundreds of locations around the country, and with multiple digital signs and POS systems in each store, a major systems upgrade could easily turn into a major expense.
But what if Dunkin’ could instead simply send a set of tiny computers to each location? Then franchise owners could slip the new cards into each system and send back the old ones to be reprogrammed and repurposed. This kind of solution could save organizations millions of dollars over the long run.
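The logistics behind such a swap are simple enough to sketch. This hypothetical Python model (the `Location` and `roll_out` names are illustrative, not any real fleet-management API) shows the card-for-card exchange:

```python
# Hypothetical sketch of a Compute Card swap across franchise locations.
# Names and serial formats are made up for illustration.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    card_serial: str  # serial number of the installed Compute Card

def roll_out(locations, new_serials):
    """Install new cards and collect the old ones for repurposing."""
    returned = []
    for loc, serial in zip(locations, new_serials):
        returned.append(loc.card_serial)  # pull the old card
        loc.card_serial = serial          # slot in the new one
    return returned  # old cards go back to HQ for reprogramming

stores = [Location("Store 12", "OLD-12"), Location("Store 47", "OLD-47")]
old = roll_out(stores, ["NEW-12", "NEW-47"])
```

After the rollout, every store runs the new card and the old serials are queued for return — the whole upgrade without touching a screen or a POS terminal.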
Under the hood
How much power do you get in a Compute Card? Well, how much do you need? At the recent Computex 2017 show in Taiwan, Intel announced four Compute Card models scheduled to begin shipping in August. These cards, powered by Celeron and Pentium processors, are designed for tasks that require flexibility and low per-unit cost.
Each Compute Card includes 4GB of DDR3 memory and 64GB of eMMC (embedded MultiMediaCard) storage, which is sort of a cross between a memory card and an SSD (solid state drive).
While that’s a generous amount of storage for a computer this small, it may not be fast enough for resource-intensive tasks such as audiovisual production. So if your customers need more power — say, to replace a NUC or other mini-PC — they may want to consider an alternative: higher-end Compute Card models with either Core i5 vPro or Core i3 processors.
Both include the same 4GB of DDR3 RAM as the lower-end models, but they have a much more robust storage solution: a 128GB Intel SSD with plenty of throughput to keep up with those faster processors. These models could be perfect for video conferencing, mobile computing and multimedia production.
All of Intel’s Compute Card models include 802.11ac wireless cards. That should offer enough up/down speed to satisfy users who rely on cloud storage for file management. The Compute Cards also come standard with Bluetooth 4.2 for wireless connectivity with keyboards, mice, cameras and audio speakers.
Part of Intel’s job has always been to look into the future and anticipate the needs of enterprise and SMB customers who are hungry for powerful, cost-effective, and easy-to-deploy solutions. Did Intel hit the mark this time? Time, and your customers, will tell.
Are you ready for the truly connected home? In just a few years, the average smart home will contain as many as 50 connected devices, according to Intel.
These devices will become part of the Internet of Things (IoT). They’ll add to a global network of intelligent, connected devices that is changing the way we live and work. It’s also a potential business opportunity for smart solution providers.
The key to the next generation of successful smart-home integrations will be local networks. These networks will need to deliver consistent gigabit-broadband connectivity to a wide variety of home devices. That means new networking hardware.
For this reason, Intel is arming ecosystem providers with its Intel Home Wi-Fi Development Kit. Intel’s new standard is currently being used by OEMs and providers, including ASUS, Deutsche Telekom and Netgear.
They’re using the kit to create a selection of Intel-based routers and gateways. The larger goal: pace the growing demand for faster speeds and more-connected devices.
This new batch of networking hardware will act as the central hub in connected homes. Intel says its unique Wi-Fi kit will let these hubs connect as many as 128 clients simultaneously. The hubs will also maintain consistent, high-speed connectivity, even as connected devices — and their demands for bandwidth — come and go. That’s no small feat!
Smarter homes tomorrow
Tomorrow’s connected home will offer a long list of powerful capabilities. Universal control of entertainment systems is just the beginning. You can also expect to see a totally integrated network. Appliances, security systems, climate-control components and other nearby digital devices will all provide and share information.
Tomorrow’s home network will also track users from room to room, turning on lights, adjusting the temperature, and announcing the arrival of guests it recognizes via biometric identification. It will know where you’re going, warn you to leave early if there is traffic on your route, and keep a digital eye on the house while you’re gone. It might even remind you to pick up milk and eggs on the way home!
Networked smart-home devices will not only collect information, but also learn to help manage events.
(Image courtesy of Intel)
Watching one of Intel’s smartly produced connected home videos can be an eye-opening experience. Not just because of how the smart home functions, but also because of the way it’s learning. The connected home will understand more than just what users do. It will also attempt to analyze why.
Over time, this kind of intelligence should be able to anticipate users’ needs. This, in turn, could save time and money, strengthen security, and assist both elderly and disabled users.
This cutting-edge home technology is within our grasp. But it won’t fulfill its promise unless we can supply a disparate array of smart devices with high-speed connectivity to the servers that power their intelligence.
That’s where you and your customers come in. The future is emerging as we speak. It’s time to get ready.
FPGA technology is having a renaissance. New designs like the recently released Intel Stratix 10 offer better performance, 70 percent lower power consumption, up to 1 TB/sec. of memory bandwidth, and up to 10 teraFLOPS of single-precision floating-point performance. It even has its own onboard quad-core ARM processor.
What does FPGA do?
FPGA stands for field-programmable gate array. What makes this technology increasingly relevant in today’s market is its ability to be programmed — and re-programmed — after being deployed in the field. Applications include data centers, the Internet of Things (IoT), high performance computing (HPC), autonomous vehicles and even machine learning.
Here’s an example, courtesy of the FPGA For Dummies eBook. It involves a car’s rear-view camera. Imagine that a manufacturer has put a car on the market whose rear-view camera delivers its image to the dashboard screen in 250 milliseconds. Now imagine that the government has changed its regulation to require a maximum delay of only 100 milliseconds. What's the automaker to do?
If the company used a standard processor to control its rear-view camera, it would need to either upgrade or replace the camera. But if it used a connected FPGA, the company could simply send the FPGA new instructions anytime, anywhere. In fact, the FPGA could run a brand-new command set every time it starts up. New instructions could potentially be delivered to the FPGA-enabled camera hardware through a wireless update. This could save millions of dollars in recall costs, while also increasing customer safety and satisfaction.
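The core idea, behavior defined by a loadable configuration rather than by fixed silicon, can be sketched in a few lines. This toy Python model is purely illustrative; a real FPGA loads a bitstream, not a method call:

```python
# Toy model of field reprogrammability: the camera's processing pipeline
# is whatever configuration was last loaded. All names and latency
# figures are illustrative, taken from the article's example.
class ReconfigurableCamera:
    def __init__(self, pipeline_latency_ms):
        self.latency_ms = pipeline_latency_ms

    def load_update(self, new_latency_ms):
        """Simulate flashing a leaner pipeline over the air."""
        self.latency_ms = new_latency_ms

    def meets_regulation(self, max_latency_ms):
        return self.latency_ms <= max_latency_ms

cam = ReconfigurableCamera(pipeline_latency_ms=250)
assert not cam.meets_regulation(100)   # fails the new 100 ms rule
cam.load_update(new_latency_ms=90)     # wireless update, no recall
assert cam.meets_regulation(100)       # now compliant
```

The fixed-processor alternative would require replacing hardware at every step where `load_update` is called here — which is the whole point of the FPGA approach.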
The flexibility offered by an FPGA isn’t limited to automakers, of course. The technology can also benefit servers, IoT networks and more.
Servers and HPC systems can work faster and more efficiently by offloading software-based algorithms and analytics onto FPGA hardware. This can be especially useful for large server arrays tasked with complex video streaming, such as those deployed by Netflix and Amazon. FPGA-equipped servers, when combined with high-performance Xeon processors, can achieve extremely low latency and very precise frame-level synchronization.
FPGA’s bright future
FPGA is one of those older technologies that has made a comeback, similar to the way computer timesharing has been reborn as the cloud. The first FPGAs date back to the 1980s. But those early devices offered only a fraction of the power and functionality available now, and they soon fell out of favor. It was FPGA market leader Altera (acquired by Intel in late 2015) that made the technology relevant again.
In the ensuing years, FPGA architecture has certainly come a long way. In fact, there are now so many possible implementations, engineers are still just scratching the technology’s surface. We can only guess what the future of this technology will offer. But one thing is sure: With a platform this flexible, the possibilities are nearly limitless.
The world of wearable technology is growing rapidly. Designers and OEMs are turning the common into the exceptional, with help from small and smart microcomputers such as Intel’s Curie module. Watches, monitors, clothes and glasses can now keep track of our stats and habits, pushing untold exabytes through the Internet of Things (IoT).
The result is an increasingly connected world. Indeed, the adoption of wearables has risen quickly. In 2014 only about 1 in 5 U.S. residents had a wearable; by last year, that had grown to nearly half, according to PwC.
As the PwC chart below shows, the most common type of wearable continues to be the ubiquitous fitness band. But the adoption of smart clothes, glasses and multimedia devices is also on the rise, thanks to increased functionality, lower prices and longer battery lives.
Exciting Wearables for Healthcare
Some of the most noteworthy advances in wearable tech are those designed to improve our health. Pedometers and heart-rate monitors such as the popular Fitbit were just the beginning. The next generation of smart wearables will be able to track far more vital data than how many steps we take in a day.
The K’Track Glucose (pictured below) is a great example. On the outside it looks like a common smart watch. But the K’Track could revolutionize diabetes management by making continuous glucose monitoring easier, less expensive and less painful. This device is now being certified for medical use, and shipments should start in 2018. The expected retail price is just $150.
The K’Track Glucose promises to transform diabetes management.
Another example is QardioCore, a wearable device that measures continuous electrocardiogram (ECG) data, heart rate, respiratory rate and other related activity. Its supplier calls the device “the world’s first wearable ECG.” For people with heart conditions, the QardioCore promises to deliver a whole new level of mobility and security. It will sync to the user’s smartphone to provide an overview, then automatically send that data to their physician. (The K’Track Glucose will do this, too.) The QardioCore retails for $450, is available now for pre-order, and is set to ship this August.
Tiny Under the Hood
Intel’s major processor releases tend to get all the press, but you won’t find a Kaby Lake quad-core chip inside the newest wearables. Instead, the chip powering many of the next generation is tiny, just a fraction of the size of a standard processor. Intel calls it the Curie module, and it’s a low-power chip the size of a button, powered by a battery just as small. Here’s a look, along with a standard pencil for scale:
Intel’s Curie module offers a tiny, low-power solution for wearables.
Despite its diminutive stature, Curie sports a 6-axis combo sensor (accelerometer plus gyroscope) along with Bluetooth low-energy connectivity. On-die memory amounts to 384KB of flash plus a tiny 80KB of SRAM. That might sound like the specs of your first DOS workstation. But it’s actually enough to add game-changing functionality to such common items as clothes, glasses and helmets.
Intel’s Curie module is already finding its way into smart products from TAG Heuer, Oakley and New Balance, with more on the way. The wearables market looks set for serious growth. Are you and your customers ready?
Intel’s latest memory innovation, called Optane, is now available to enterprise customers running large server arrays. Smaller form factors designed for laptops and PCs will be available in the second half of the year, Intel says.
With Optane, Intel has simultaneously addressed 3 important memory concepts: speed, density and non-volatility. That’s a rarity in the current tech climate, where Moore’s Law is lagging behind schedule. These days, noteworthy innovations too often come in the form of minor speed bumps and — yawn — miniaturized form factors.
Existing memory technology has always fallen somewhat short of the mark. While DRAM is fast, it lacks the density to hold larger files. Nor does it score any points for non-volatility: DRAM’s contents disappear when the power is cut off. By contrast, NAND memory (found in solid state drives) offers great improvements over conventional storage; it holds large amounts of data, with or without a power supply. But NAND’s speed doesn’t come close to DRAM’s.
Using a new architecture that Intel calls 3D XPoint, Optane checks all 3 boxes — speed, density and non-volatility — making it a big step forward in storage technology. The module sits between the drive and the main memory, keeping large files and apps ready for lightning-fast deployment. The promised results are much faster load times, increased processing performance and smoother operation.
Intel’s Optane solid state drives offer speed, density and non-volatility.
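The caching idea behind Optane can be sketched as a simple look-aside tier. In this hedged Python model, the latency numbers are illustrative placeholders, not measured Optane or disk figures:

```python
# Simplified look-aside cache: a small, fast tier (standing in for
# Optane) sits in front of a slow drive. Latencies below are
# hypothetical placeholders for illustration only.
DRIVE_LATENCY_US = 10_000   # pretend spinning-disk read, microseconds
CACHE_LATENCY_US = 10       # pretend fast-tier read, microseconds

class TieredStore:
    def __init__(self, drive):
        self.drive = drive   # slow backing store (dict as a stand-in)
        self.cache = {}      # fast tier

    def read(self, key):
        if key in self.cache:            # hot data: fast path
            return self.cache[key], CACHE_LATENCY_US
        value = self.drive[key]          # cold data: slow path
        self.cache[key] = value          # promote for next time
        return value, DRIVE_LATENCY_US

store = TieredStore({"app.img": b"..."})
_, first = store.read("app.img")    # slow: served from the drive
_, second = store.read("app.img")   # fast: served from the cache
```

The second read of the same data skips the slow drive entirely — the effect Optane aims for, at hardware speed and without the application knowing.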
7th Gen or Bust
But there’s a catch. (Isn’t there always?) Novel technology often requires cutting-edge connectivity, and such is the case with Optane. It currently works only on 7th Gen Intel “Kaby Lake” series chipsets equipped with modified M.2 connectors.
If your customers are in the market for new Optane-equipped laptops, you’ll need to show them Kaby Lake-equipped options such as Lenovo’s ThinkPad T570, which starts at around $900. Lenovo says the device will soon be available with an optional 16GB Optane PCIe M.2 card. Choosing that option should significantly increase performance. But watch out: It will likely also significantly increase the price.
If your customers’ situation calls for a more traditional form factor, Dell will soon offer Optane-compatible OptiPlex desktop computers featuring 7th Gen Intel “Kaby Lake” processors and chipsets. Certain Intel NUC mini-PCs will offer Optane compatibility, too, putting big performance gains in a small package.
New Life for Old Drives
Optane is designed to provide high-speed access to large data sets such as applications and operating-system images. Conventional hard drives do this, too, but can only serve the data at a fraction of the speed. When used together, Optane memory and conventional drives can be a potent combination.
No, this is not an entirely new concept. Hybrid drives — conventional drives equipped with small solid-state modules — have been around for years. But placing an Optane memory module inline with a conventional hard drive redefines the concept, adding a great deal of speed and efficiency. That means an Optane upgrade could save your customers the expense of replacing aging conventional hard drives in the near term.
Xerox wants to change the way your clients interact with printers. Now they’ll print with customized workflows, apps and cloud connectivity.
That's one of the big ideas behind Xerox’s launch last week of no fewer than 29 new multifunction printers (MFPs). Xerox says its new printer designs, dubbed VersaLink and AltaLink, will act as what it calls “smart, connected workplace assistants.”
The printers’ new interface design, explains Xerox CTO Steve Hoover, is based on ethnography, the observational study of how people behave in their everyday environments. And because smartphone and tablet interfaces are so ubiquitous, Xerox researchers believed it made sense to employ a similar user experience (UX) on the new printers.
The result? Xerox has embedded what’s essentially a tablet and new software platform into its printers. The new touchscreen interfaces — complete with app icons, workflow shortcuts and contextual menus — should feel familiar to nearly every smartphone and tablet user.
Xerox hopes its new printer touchscreens will appeal to smartphone & tablet users.
But building a touchscreen into a printer is not a totally new concept. Even some sub-$100 home printers — for example, the $80 HP Envy 5540 — include a graphical user interface (GUI) presented on a small color touchscreen.
So is Xerox doing something truly new here? To answer this question, you need to look at not just the human-interface elements, but also the complete system.
In fact, Xerox is betting its new printers will succeed with a combination of technologies. These have been designed to work together to save money and time, and reduce frustration.
For example, a user of the new Xerox printers could trigger a preprogrammed workflow to instantly scan a multipage document, print it, and then collate copies for local workers. The printer could also upload a copy of the document to a cloud service such as Dropbox or Google Drive, and then email a copy to a preselected group of mobile workers. It could even send a text message to everyone involved, letting them know that a new document is available. And all this with the touch of a single button.
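Conceptually, such a one-touch workflow is just an ordered list of steps executed in sequence. The sketch below is a hypothetical illustration; the step names are not the Xerox API:

```python
# Hypothetical one-touch workflow: each step is a function applied to
# the document in order, and the results are logged.
def run_workflow(steps, document):
    log = []
    for step in steps:
        log.append(step(document))
    return log

scan   = lambda doc: f"scanned {doc}"
print_ = lambda doc: f"printed {doc}"
upload = lambda doc: f"uploaded {doc} to cloud storage"
notify = lambda doc: f"notified team that {doc} is available"

log = run_workflow([scan, print_, upload, notify], "q3-report.pdf")
```

Binding that list of steps to a single on-screen button is the part Xerox is selling; the sequencing itself is this simple.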
While none of these tasks is particularly difficult by itself, taken together they can be time-consuming, to say the least. The lost time adds up quickly in offices where many documents must be scanned, printed and distributed every day.
The Almighty Platform
There’s another big idea behind the Xerox announcements, and it’s one that’s extremely ambitious. Xerox is bidding for not only big market share in the crowded MFP market, but also a power position in the new, connected office.
By opening the door for user-definable workflows and app development, Xerox could gain some of the muscle enjoyed by Apple and Google, suppliers of the world’s 2 most popular mobile-device platforms.
And if a market opens up for third-party apps — for instance, those that connect a Xerox VersaLink MFP to other office solutions such as video-conferencing hardware and inventory control systems — then Xerox could very well find itself at the center of the modern office.
Digital security and surveillance (DSS) tech is getting smarter. Using the latest advances in artificial intelligence, microprocessing and the Internet of Things, it’s changing the way we interact with our digital hardware.
If your customers are interested in video surveillance and edge-based analytics, now could be the right time to show them the latest in smart DSS.
More Affordable Tech
It’s now possible to shrink the size of a smart DSS system from a roomful to a handful, thanks to powerful yet energy-efficient processors combined with high-speed storage such as Intel’s latest solid state drive (SSD) technology. Your customers will notice a dramatic drop in cost, too, opening new market applications for SMBs.
What used to require server arrays and a small army of IT professionals can now be handled by a well-equipped mini-PC such as the Intel NUC (pictured below, with headphones). Users can connect network-attached cameras to a PC either on- or off-premises. And new software from producers such as Sighthound, a company that specializes in AI for visual systems, can identify and categorize subjects from multiple video feeds.
DSS has some new tricks up its sleeve. Using facial recognition and AI, video surveillance systems can now eliminate a far higher percentage of annoying false positives. This is especially useful in homes and other settings where human intervention is too costly, inconvenient or time-consuming.
To prevent false positives, the latest systems can not only identify objects such as known vehicles, stray animals or waving tree branches, but also learn to ignore them. Instead of issuing a false positive for objects like these, newer DSS systems instead alert users only to the presence of unidentified humans and machines.
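In code, that filtering logic amounts to checking each detection against a learned ignore list. Here is a minimal sketch; the labels are illustrative, not any vendor’s classifier output:

```python
# Minimal false-positive suppression: alert only on detections that
# are not on the system's learned ignore list.
IGNORED = {"known vehicle", "stray animal", "tree branch"}

def alerts(detections, ignored=IGNORED):
    """Return only the detections worth waking a human for."""
    return [d for d in detections if d not in ignored]

feed = ["tree branch", "unidentified person", "known vehicle"]
assert alerts(feed) == ["unidentified person"]
```

The hard part in practice is building the ignore list automatically from observed footage; the alert logic on top of it stays this simple.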
But security isn’t the only application for the latest DSS technology. Retailers can use the technology to identify shopper traffic patterns in stores, helping them better place products and promotional signage. Similarly, civil engineers and safety personnel can use DSS to ease traffic congestion, make parking more efficient, and identify moving violations, all without the aid of expensive additional staff.
Other use cases will no doubt continue to crop up regularly as DSS technology becomes more widely implemented. We already live in a time when video-aided computers can recognize vehicles and read license plates, control drones and industrial robots, and identify faces and emotions in real time.
What will this futuristic technology bring us tomorrow? We’re limited only by our imaginations.
In a change from previous CPU releases, Intel is expected to kick off the 8th Gen rollout by releasing new U-series and Y-series variants destined for laptops, 2-in-1 convertibles and mobile devices. This fast-moving market segment is hungry for more portable power, lower power consumption and longer battery life.
We can expect to see 8th Gen processors show up in the marketplace in the first quarter of next year. These could include new 6-core Xeon chips destined for data centers and high-performance workstations with multiprocessor arrays. Standard desktop processors should be a part of that release, too, and will carry nomenclature such as Core i7-8000K.
What About 10nm?
Tech watchers have been waiting for Intel 10nm chips for quite some time now. Initial rumors indicated the new, smaller architecture might come as part of Intel’s 8th Gen release, but it was not to be.
Though Intel has not yet officially confirmed a release date, Cannon Lake 10nm chips could arrive during the second half of 2018. This is likely to be what Intel calls a fluid release, meaning Cannon Lake could be the first series to feature two chip sizes. In this case, the leaner, faster 10nm chips would find their way into enterprise-class hardware as well as high-end mobile devices. The remaining chips would be built on the current 14nm platform.
Intel knows the competition is closing in fast. Qualcomm recently unveiled its Snapdragon 835 chipset. Qualcomm also confirmed that Samsung has started building the new SoC (system on a chip) using its 10nm FinFET process.
For now, 14nm is the best the industry has to offer. The added performance that comes with Intel’s upcoming 8th Gen processors should make a compelling argument for upgrading aging infrastructure. Your clients on the threshold of a big purchase will have no shortage of options to consider.
The era of 5G mobile data is about to begin.
An estimated 50 billion Internet of Things (IoT) devices will come online over the next 5 to 10 years. Serving them will require faster, more efficient communication.
Key to full-scale IoT are high speeds, low latency and full support for low-bitrate devices and sensors. 5G technology promises all that — and more.
Intel is doing its part. On Jan. 4 the company announced the world’s first global 5G modem. No, the device is not yet available on the open market. But Intel is already supplying top-tier manufacturers with the hardware to test and develop new 5G devices. These include everything from connected appliances and wearables to autonomous vehicles.
At the same time, OEMs are working hard to develop 5G cellular vehicle-to-everything (C-V2X) communications. These OEMs include members of the newly minted 5G Automotive Association (5GAA). It’s a partnership among Audi, BMW, Daimler, Huawei, Intel, Nokia and Qualcomm.
These heavy hitters will surely be able to make significant advances in the burgeoning auto-tech market. Nokia, for example, recently completed the first 5G connection using the 5GTF interface, an industry “pre-standard” spec for 5G apps. And more will likely follow.
But 5G is not only about smart cars and coffee makers. Increased mobile bandwidth and fidelity should have far-reaching implications in many sectors. Mobile workers will bring increased facility to job sites. Healthcare practitioners will be empowered to reach beyond their hospital walls, delivering diagnosis and treatment to remote areas. Students will be able to leave the classroom and move through the world with full-featured augmented reality at their fingertips.
The implications are huge. Venkata “Murthy” Renduchintala, Intel's president of client, IoT and systems architecture, recently commented in an editorial: “Not since the transition from analog to digital have we seen such a transformation of this scale.”
In fact, Renduchintala (pictured above) believes 5G technology will foment a true technological revolution. For example, he posits a mapping drone that pushes 20Gb of data to the cloud every minute. This, in turn, could enable streets full of autonomous vehicles that communicate with drivers and passengers, pedestrians, other cars, crosswalk and traffic signals, and the smart city all around.
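A quick unit conversion shows what that drone would demand of the network. This is back-of-envelope arithmetic, not a measured figure:

```python
# 20 gigabits of mapping data per minute, expressed as sustained Mbps.
gigabits_per_minute = 20
megabits_per_second = gigabits_per_minute * 1000 / 60
print(f"Sustained uplink: {megabits_per_second:.0f} Mbps")
```

That works out to roughly 333 Mbps sustained upstream, comfortably beyond what typical 4G uplinks deliver today — exactly the kind of gap 5G is meant to close.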
It’s a bold prediction, but one that seems increasingly realistic as time and technology march forward.
Apps and other software solutions will evolve, too, as more bandwidth becomes available. That could lead to what Intel calls a “virtuous cycle of innovation.” It’s a path from technological advances like 5G to new analytic data from apps and devices, and back around again.
Fasten your seatbelt. 5G is going to be an exciting ride.