Experts from Google, T-Mobile and other tech frontiers weigh in on the future of AI

11:30pm, 25th April, 2019
SalesPal CEO Ashvin Naik, Google Cloud’s Chanchal Chatterjee, Audioburst’s Rachel Batish and T-Mobile’s Chip Reno discuss the future of artificial intelligence at the Global AI Conference in Seattle. (GeekWire Photo / Alan Boyle) Artificial intelligence can rev up recommendation engines and make self-driving cars safer. It can even . But what else will it be able to do? At today’s session of the , a panel of techies took a look at the state of AI applications — and glimpsed into their crystal balls to speculate about the future of artificial intelligence. The panelists included Chanchal Chatterjee, AI leader at ; Ashvin Naik, CEO of , which markets AI-enabled sales analysis tools; Rachel Batish, vice president of product for , an audio indexing service; and Chip Reno, senior advanced analytics manager at . The moderator was Shailesh Manjrekar, head of product and solutions marketing for , a multi-cloud data storage and management company. Here are five AI frontiers that came up in today’s conversations, plus a couple of caveats to keep in mind: Smarter grocery stores: AI-enabled grocery shopping was pioneered right here in Seattle at , but the trend is catching on. Today called the Intelligent Retail Lab in Levittown, N.Y. Britain’s takes a different tack: Users fill up a virtual shopping cart, then schedule a one-hour delivery slot. Google Cloud helped Ocado develop the , including a recommendation engine that figures out customers’ shifting preferences, an algorithm that handles and prioritizes customer service emails, and a as Ocado’s previous system. Energy-saving server farms: Chatterjee pointed to how Google used its DeepMind machine learning platform to . Before AI was put on the case, 10 years’ worth of efficiency measures could reduce energy usage by merely 12 percent, he said. Within six months, AI brought about a 40 percent reduction. “That was a huge difference that AI made in a very short amount of time that we could not do with 10 years of research,” Chatterjee said. Financial market prediction: Hedge fund managers and bankers are already , detect market manipulation and assess credit risks. But Chatterjee said the models are getting increasingly sophisticated. AI is being used to predict how margin trades could play out, or whether undervalued financial assets are ripe for the picking. AI models could even anticipate . “When the lock-in period expires … that’s a great time to short,” Chatterjee said. Deeper, wider AI conversations: Chatterjee predicted that our conversations with voice assistants are likely to get wider, deeper and more personal as AI assistants become smarter. Audioburst’s Batish said conversational AI could provide a wider opening for smaller-scale startups and for women in tech. “Women are very much prominent in conversational applications and businesses,” she said. Salespal’s Naik agreed with that view — but he worried about the dearth of compelling applications, based on his own company’s experience with voice-enabled devices like Amazon Echo and Google Home. “They’re gathering dust. … We use them just to listen to music or set up alarms. That’s it,” he said. AI for good, or evil? Chatterjee said AI could be a powerful tool to root out fraud and corruption. AI applications could be built “to see what influence relationships have on outcomes — that tells you if there are any side deals being made,” he said. But Batish worried about the rise of , virtual and . “I’m actually afraid of what that could bring into our world,” she said. 
“It would be interesting to see how companies are trying to be able to monitor or identify fake situations that are being built out of very complicated AI.” Watch out for job disruption: Many studies have pointed out that automation is likely to disrupt employment, especially in the service, manufacturing and transportation sectors. “Anything that is repetitive, that can be extracted from multiple sources, that doesn’t have a lot of creativity and innovation, is at risk due to AI,” Chatterjee said. “That means that more people will have to move into other sectors.” Watch out for the hype: “I’d like to see people get away from the hype a little bit,” T-Mobile’s Reno said. “I’m on the client side, so I see all the pitches involving AI and ML or deep learning. … A lot of times, AI is not applicable to certain use cases where we’re applying it. Just good old-fashioned statistics or business intelligence is fine. So I think that the future of AI relies on getting past the hype and getting more into aligning these awesome tools and algorithms to specific business cases.”
Talk all things robotics and AI with TechCrunch writers

2:58pm, 15th April, 2019
This Thursday, we’ll be hosting our third annual at . The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists. The event will dig into recent developments in robotics and AI, which startups and companies are driving the market’s growth and how the evolution of these technologies may ultimately play out. In preparation for our event, TechCrunch’s spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event, alongside , who will also be on the scene. Friday at 11:00 am PT, Brian and Lucas will be sharing with members (on a conference call) what they saw and what excited them most. Tune in to find out about what you might have missed and to ask Brian and Lucas anything else robotics, AI or hardware. And want to attend the event in Berkeley this week? . To listen to this and all future conference calls, become a member of Extra Crunch.
Alcatraz AI is building Face ID for corporate badges

10:57am, 2nd April, 2019
Meet a startup that wants to replace all the badge readers in your office with a Face ID-like camera system. Alcatraz has integrated multiple sensors to identify faces and unlock doors effortlessly. If you think about it, it’s weird that fingerprint sensors took off on mobile but everybody is still using plastic badges for their offices. Sure, high-security buildings use fingerprint and iris scanners. But it adds too much friction in too many cases. First, when everybody gets back from their lunch break, it can create a traffic jam if everybody needs to place their finger on a sensor. Second, onboarding new employees would require you to add their biometric information to the system. It can be cumbersome for big companies. promises a faster badging experience with facial authentication. When you join a company, you also get a physical badge. The first few times you use the badge, Alcatraz AI scans your face to create a model for future uses — after a while, you can leave your badge at the office. The company has built custom hardware with three different sensors that include both traditional RGB sensors and infrared sensors for 3D mapping. Customers pay Alcatraz AI to install those hybrid badge/face readers. After that, companies pay an annual fee in order to use the platform. Alcatraz AI customers get analytics, real-time notifications and can detect tailgating. This way, if somebody isn’t supposed to go in the secret lab, Alcatraz AI can detect if they’re trying to sneak in by following someone who is authorized to go in there. The idea is that the ongoing license cost should cover what your company was paying for guards. The startup has raised nearly $6 million from Hardware Club, Ray Stata, JCI Ventures, Ruvento Ventures and Hemi Ventures.
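The article doesn’t detail Alcatraz’s software, but the enrollment flow it describes (a few badge swipes paired with face scans, then badge-free entry) maps onto a familiar embedding-and-match pattern. Here is a minimal Python sketch of that flow; the `embed_face` stand-in, the three-swipe enrollment count and the 0.7 similarity threshold are illustrative assumptions, not Alcatraz’s actual parameters.

```python
from __future__ import annotations

import numpy as np

# Hypothetical sketch of the flow described in the article: the first few
# badge swipes pair a face embedding with the badge ID, after which the
# face alone is enough to open the door.
ENROLL_SWIPES = 3          # assumed number of swipes before badge-free use
MATCH_THRESHOLD = 0.7      # assumed similarity cutoff

enrollments: dict[str, list[np.ndarray]] = {}   # badge_id -> face embeddings


def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (RGB plus IR depth fusion)."""
    vec = image.astype(np.float32).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)


def badge_swipe(badge_id: str, image: np.ndarray) -> bool:
    """Badge plus camera: accumulate embeddings until enrollment is complete."""
    enrollments.setdefault(badge_id, []).append(embed_face(image))
    return True  # the badge alone is sufficient during enrollment


def face_only_entry(image: np.ndarray) -> str | None:
    """After enrollment, match the live face against every enrolled badge."""
    probe = embed_face(image)
    best_id, best_sim = None, MATCH_THRESHOLD
    for badge_id, embs in enrollments.items():
        if len(embs) < ENROLL_SWIPES:
            continue  # not enough samples yet; this person still needs the badge
        sim = max(float(probe @ e) for e in embs)
        if sim > best_sim:
            best_id, best_sim = badge_id, sim
    return best_id  # None means the door stays locked
```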
Turing Award honors a different kind of AI network with ‘Nobel Prize of computing’

1:14pm, 28th March, 2019
Facebook’s Yann LeCun, Mila’s Yoshua Bengio and Google’s Geoffrey Hinton share the 2018 Turing Award. (ACM Photos) The three recipients of the Association for Computing Machinery’s 2018 Turing Award, known as the “Nobel Prize of computing,” are sharing the $1 million award for their pioneering work with artificial neural networks — but that’s not all they share. Throughout their careers, the researchers’ paths and spheres of influence in the field of artificial intelligence have crossed repeatedly. Yann LeCun, vice president and chief AI scientist at Facebook, conducted postdoctoral research under the supervision of Geoffrey Hinton, who is now a vice president and engineering fellow at Google. LeCun also worked at Bell Labs in the early 1990s with Yoshua Bengio, who is now a professor at the University of Montreal, scientific director of Quebec’s Mila AI institute, and an adviser for Microsoft’s AI initiative. All three also participate in the program sponsored by CIFAR, previously known as the Canadian Institute for Advanced Research. In , ACM credited the trio with rekindling the AI community’s interest in deep neural networks — thus laying the groundwork for today’s rapid advances in machine learning. “Artificial intelligence is now one of the fastest-growing areas in all of science, and one of the most-talked-about topics in society,” said ACM President Cherri Pancake, a professor emeritus of computer science at Oregon State University. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation.” And you don’t need to work in a lab to feel their impact. “Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago,” Pancake said. The current approach to machine learning, championed by Hinton starting in the early 1980s, shies away from telling a computer explicitly how to solve a given task, such as object classification. Instead, the software uses an algorithm to analyze the patterns in a data set, and then applies that algorithm to classify new data. Through repeated rounds of learning, the algorithm becomes increasingly accurate. Hinton, LeCun and Bengio focused on developing neural networks to facilitate that learning. Such networks are composed of relatively simple software elements that are interconnected in ways inspired by the connections between neurons in the human brain.
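The learning loop described above (analyze patterns in a data set through repeated rounds, then classify new data) can be made concrete with a toy example. The sketch below is not the laureates’ work, just a generic two-layer network trained by gradient descent on the XOR problem, showing simple interconnected elements becoming more accurate with each round.

```python
import numpy as np

# Toy two-layer neural network learning XOR by repeated rounds of training,
# illustrating the "learn patterns from data, then classify new data" loop.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output "neuron"


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for step in range(5000):                         # repeated rounds of learning
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    out = sigmoid(h @ W2 + b2)                   # network's current guess
    grad_out = (out - y) * out * (1 - out)       # backpropagate the error
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0] as accuracy improves
```

Modern deep networks differ mainly in scale and architecture; the train-by-error-feedback loop is the same basic idea the three researchers championed.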
This self-driving AI faced off against a champion racer (kind of)

2:14pm, 27th March, 2019
Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course. To set expectations here, this isn’t some stunt, it’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please! The question which Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary. If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so? The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified and their assumptions are of the type to produce increasingly inaccurate results as values exceed ordinary limits. Imagine if such a simulator simplified each wheel to a point or line when during a slide it is highly important which side of the tire is experiencing the most friction. Such detailed simulations are beyond the ability of current hardware to do quickly or accurately enough. But the results of such simulations can be summarized into an input and output, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns. The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. It’s fairly basic. The model then consults its training, but is also informed by the real-world results, which may perhaps differ from theory. So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be. And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow. The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. 
As the paper reads: Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track. Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track. In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons. “We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.” Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human. This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene. The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge. The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.
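The control strategy described above combines a learned feedforward model (what the wheel should do for this speed and curvature, based on simulation data) with feedback from the measured tracking error. Below is a minimal sketch of that structure, not the Stanford team’s actual controller: the neural network is replaced by a placeholder function, and the wheelbase and feedback gains are assumed values.

```python
import numpy as np


def feedforward_steering(speed_mps: float, curvature: float) -> float:
    """Placeholder for the learned model: a nominal steering angle (radians)
    for this speed and path curvature, as the simulation data would suggest."""
    wheelbase = 2.6  # assumed wheelbase in meters
    return float(np.arctan(wheelbase * curvature) * (1.0 + 0.02 * speed_mps))


def steering_command(speed_mps: float, curvature: float,
                     lateral_error_m: float, heading_error_rad: float) -> float:
    """Feedforward from the learned model plus feedback on measured error:
    if the car is drifting off the intended line, nudge the wheel further."""
    k_lat, k_head = 0.15, 0.4  # illustrative feedback gains
    ff = feedforward_steering(speed_mps, curvature)
    fb = k_lat * lateral_error_m + k_head * heading_error_rad
    return float(np.clip(ff + fb, -0.6, 0.6))  # assumed steering limits (rad)


# Example: 40 m/s through a gentle left-hander, drifting 0.3 m wide of the line
print(steering_command(40.0, 0.01, lateral_error_m=0.3, heading_error_rad=0.02))
```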
Microsoft will be adding AI ethics to its standard checklist for product release

4:39pm, 25th March, 2019
Harry Shum is Microsoft’s executive vice president for AI and research. (GeekWire Photo) Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts. AI ethics will join privacy, security and accessibility on the list, Shum in San Francisco. Shum, who is executive vice president of group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.” Among the ethical concerns are the potential for AI agents to from the data on which they’re trained, to through deep data analysis, to , or simply to be . Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy. In addition to pre-release audits, Microsoft is addressing AI’s ethical concerns by improving its facial recognition tools and adding altered versions of photos in its training databases to show people with a wider variety of skin colors, other physical traits and lighting conditions. Shum and other Microsoft executives have discussed the ethics of AI numerous times before today: Back in 2016, Microsoft CEO Satya Nadella for AI research and development, including the need to guard against algorithmic bias and ensure that humans are accountable for computer-generated actions. In a book titled “The Future Computed,” Shum and Microsoft President Brad Smith , supported by industry guidelines as well as government oversight. They wrote that “a Hippocratic Oath for coders … could make sense.” Shum and Smith , or Aether. Last year, Microsoft Research’s Eric Horvitz said due to the Aether group’s recommendations. In some cases, he said specific limitations have been written into product usage agreements — for example, a ban on facial-recognition applications. Shum told GeekWire almost a year ago that he hoped the Aether group would develop — exactly the kind of pre-release checklist that he mentioned today. Microsoft has been delving into the societal issues raised by AI with other tech industry leaders such as Apple, Amazon, Google and Facebook through a nonprofit group called the . But during his EmTech Digital talk, Shum acknowledged that governments will have to play a role as well. The nonprofit AI Now Foundation, for example, has called for , with special emphasis on applications such as facial recognition and affect recognition. Some researchers have called for creating a who can assist other watchdog agencies with technical issues — perhaps modeled after the National Transportation Safety Board. Others argue that entire classes of AI applications should be outlawed. In an and an , a British medical journal, experts called on the medical community and the tech community to support efforts to ban fully autonomous lethal weapons. The issue is the subject of a this week.
White House starts to flesh out AI research plan — and raises its profile with AI.gov

1:09pm, 19th March, 2019
White House tech adviser Michael Kratsios addresses scores of executives, experts and officials at a White House summit focusing on artificial intelligence in May 2018. (OSTP via Twitter) For months, the White House has been talking up artificial intelligence as one of America’s most important tech frontiers. Now we’re starting to see some of the dollar signs behind the talk. In newly released budget documents, the Trump administration says it wants to split $850 million in civilian federal spending on AI research and development between the National Science Foundation, the National Institutes of Health, the National Institute of Standards and Technology and the Energy Department. This is in addition to for AI and machine learning, including $208 million for the Joint Artificial Intelligence Center. Based on the agency-by-agency breakdowns, NSF would get the lion’s share of the $850 million — specifically, The Department of Energy says it’s that would “improve the robustness, reliability, and transparency of Big Data and AI technologies, as well as quantification and development of software tools for DOE mission applications.” About $71 million would go to DOE’s Office of Science, and $48 million would go to the National Nuclear Security Administration, which safeguards the nation’s nuclear arsenal. The National Institutes of Health doesn’t lay out exactly how much it’s requesting in its , but it does detail what the money would be used for: “NIH is focused on the promise of artificial intelligence (AI) and machine learning (ML) for catalyzing advances in basic (e.g., image interpretation, neuroscience, genomic variants and disease risk, gene structure, and epigenomics) and clinical research (e.g., robotic surgery, natural language processing of electronic health record data, inferring treatment options for cancer, reading radiology results). NIH recognizes that there are many areas of biomedical research where novel computing, machine intelligence, and deep learning techniques have the potential to advance human health.” NIST hasn’t yet provided details about the funds it’s aiming to devote to AI, but its total R&D budget would be trimmed by 8 percent if the administration’s proposal is accepted. NSF would face a 10 percent cut, and NIH would see its total R&D budget reduced by 13 percent. The White House says fiscal austerity is forcing a narrowing of R&D priorities. “While recognizing the continued importance of R&D spending to support innovation, fiscal prudence demands a more focused approach to the Federal R&D budget in the context of America’s multi-sector R&D enterprise. This approach prioritizes maintaining peace through strength and ensures U.S. leadership in the Industries of the Future,” the White House said in its R&D overview. AI is considered one of four Industries of the Future, along with quantum information science, advanced communications systems such as 5G and advanced manufacturing. Today the White House sent another signal that it wants to raise the profile of AI research by launching a new internet portal about its policy: . The website pulls together the administration’s policies, documents and program descriptions relating to AI. “The White House’s newly unveiled illustrates our whole of government approach to national artificial intelligence policy and the historic strides this administration has made over the past two years,” Michael Kratsios, deputy assistant to the president for technology policy, said in a news release. 
“We look forward to continued advancements solidifying America’s position as the world leader in AI and ensuring this emerging technology is developed and applied for the benefit of the American people.” Will the White House’s AI spending plan get through Congress? It’s likely to get some tweaks along the way, but lawmakers have been generally supportive of AI initiatives. In contrast, the White House’s wider plan to trim back on R&D spending is facing pushback from the scientific community and some congressional leaders.
Who’ll serve as AI’s watchdog? Experts trade suggestions at AI2 policy workshop

8:40pm, 7th March, 2019
Seattle University’s Tracy Kosa, the University of Maryland’s Ben Shneiderman and Rice University’s Moshe Vardi take questions during an AI policy workshop at the Allen Institute for Artificial Intelligence, moderated by AI2 CEO Oren Etzioni. (GeekWire Photo / Alan Boyle) Do we need a National Algorithm Safety Board? How about licensing the software developers who work on critical artificial intelligence platforms? Who should take the lead when it comes to regulating AI? Or does AI need regulation at all? The future of AI and automation, and the policies governing how far those technologies go, took center stage today during a policy workshop presented by Seattle’s Allen Institute for Artificial Intelligence, or AI2. And the experts who spoke agreed on at least one thing: Something needs to be done, policy-wise. “Technology is driving the future — the question is, who is doing the steering?” said Moshe Vardi, a Rice University professor who focuses on computational engineering and the social impact of automation. Artificial intelligence is already sparking paradigm shifts in the regulatory sphere: For example, when a Tesla car owner was killed in a 2016 highway collision, the National Transportation Safety Board at the company’s self-driving software. (And there have been such for the NTSB to investigate since then.) The NTSB, which is an , may be a useful model for a future federal AI watchdog, said Ben Shneiderman, a computer science professor at the University of Maryland at College Park. Just as the NTSB determines where things go wrong in the nation’s transportation system, independent safety experts operating under a federal mandate could analyze algorithmic failures and recommend remedies. One of the prerequisites for such a system would be the ability to follow an audit trail. “A flight data recorder for every robot, a flight data recorder for every algorithm,” Shneiderman said. He acknowledged that a National Algorithm Safety Board may not work exactly like the NTSB. It may take the form of a “SWAT team” that’s savvy about algorithms and joins in investigations conducted by other agencies, in sectors ranging from health care to highway safety to financial markets and consumer protection. Ben Shneiderman, a computer science professor at the University of Maryland at College Park, says the National Transportation Safety Board could provide a model for regulatory oversight of algorithms that have significant societal impact. (GeekWire Photo / Alan Boyle) What about the flood of disinformation and fakery that AI could enable? That might conceivably fall under the purview of the Federal Communications Commission — if it weren’t for the fact that a provision in the 1996 Communications Decency Act, known as Section 230, absolves platforms like Facebook (and, say, your internet service provider) from responsibility for the content that’s transmitted. “Maybe we need a way to just change [Section] 230, or maybe we need a fresh interpretation,” Shneiderman said. Ryan Calo, a law professor at the University of Washington who focuses on AI policy, noted that the Trump administration isn’t likely to go along with increased oversight of the tech industry. But he said state and local governments could play a key role in overseeing potentially controversial uses of AI. Seattle, for example, that requires agencies to take a hard look at surveillance technologies before they’re approved for use. Another leader in the field is New York City, which has to monitor how algorithms are being used. 
Determining the lines of responsibility, accountability and liability will be essential. Seattle University law professor Tracy Kosa went so far as to suggest that software developers should be subject to professional licensing, just like doctors and lawyers. “The goal isn’t to change what’s happening with technology, it’s about changing the people who are building it, the same way that the Hippocratic Oath changed the way medicine was practiced,” she said. The issues laid out today sparked a lot of buzz among the software developers and researchers at the workshop, but Shneiderman bemoaned the fact that such issues haven’t yet gained a lot of traction in D.C. policy circles. That may soon change, however, due to AI’s rapid rise. “It’s time to grow up and say who does what by when,” Shneiderman said. Odds and ends from the workshop: Vardi noted that there’s been a lot of talk about ethical practices in AI, but he worried that focusing on ethics was “almost a ruse” on the part of the tech industry. “If we talk about ethics, we don’t have to talk about regulation,” he explained. Calo worried about references to an “AI race” or use of the term by the White House. “This is not only poisonous and factually ridiculous … it leads to bad policy choices,” Calo said. Such rhetoric fails to recognize the international character of the AI research community, he said. Speaking of words, Shneiderman said the way that AI is described can make a big difference in public acceptance. For example, terms such as “Autopilot” and “self-driving cars” may raise unrealistic expectations, while terms such as “adaptive cruise control” and “active parking assist” make it clear that human drivers are still in charge. Over the course of the day, the speakers provided a mini-reading list on AI policy issues: by Shoshana Zuboff; by Cathy O’Neil; a white paper distributed by IEEE; and an oldie but goodie by Charles Perrow.
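Shneiderman’s “flight data recorder for every algorithm” amounts to an append-only, tamper-evident log of automated decisions that investigators could replay after a failure. As a rough illustration of what such a recorder might capture (the field names and hash-chaining scheme here are assumptions, not a proposed standard):

```python
import hashlib
import json
import time


def record_decision(log_path: str, model_id: str, model_version: str,
                    inputs: dict, output, confidence: float) -> str:
    """Append one automated decision to a tamper-evident audit log,
    chaining each entry to a hash of everything logged before it."""
    try:
        with open(log_path, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "genesis"
    entry = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,          # or a hash/reference, for privacy
        "output": output,
        "confidence": confidence,
        "prev_hash": prev_hash,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return prev_hash


# Example: log a single (hypothetical) loan-scoring decision for later review
record_decision("decisions.log", "credit-scorer", "2019.03.1",
                {"applicant_id": "a-123"}, output="deny", confidence=0.71)
```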
The Samsung S10’s cameras get ultra-wide-angle lenses and more AI smarts

2:34pm, 20th February, 2019
Samsung’s S10 lineup features a whopping four models: the S10e, the S10, the S10+ and the S10 5G. Unsurprisingly, one of the features that differentiates these models is the camera system. Gone are the days, after all, when one camera would suffice. Now, all the S10 models, except for the budget S10e, feature at least three rear cameras and the high-end 5G model even goes for four — and all of them promise more AI smarts and better video stabilization. All models get at least a standard 12MP rear wide-angle camera with a 77-degree field of view, a 16MP ultra-wide-angle camera for 123-degree shots, and a 10MP selfie camera. The standard S10 then adds a 12MP telephoto lens to the rear camera setup and the S10+ gets an 8MP RGB depth camera. The high-end S10 5G adds a hQVGA 3D depth camera to both the front and rear setup. The ultra-wide lens is a first for Samsung’s flagship S10 series, though it’s a bit late to the game here given that others have already offered these kinds of lenses on their phones before. Still, if you are planning on getting an S10, this new lens will come in handy for large group shots and landscape photos. On the video front, Samsung promises better stabilization, UHD quality for both the rear and front cameras and HDR10+ support for the rear camera. These days, though, it’s all about computational photography and like its competitors, Samsung promises that its new cameras are also significantly smarter than their predecessors. Specifically, the company is pointing to its new scene optimizer for the S10 line, which uses the phone’s neural processing unit to recognize and process up to 30 different scenes and also offer shot suggestions to help you better frame the scene. Since we haven’t actually used the phones yet, though, it’s hard to say how much of a difference those AI smarts really make in day-to-day use.
Xnor’s saltine-sized, solar-powered AI hardware redefines the edge

1:43pm, 13th February, 2019
“If AI is so easy, why isn’t there any in this room?” asks Ali Farhadi, founder and CEO of Xnor, gesturing around the conference room overlooking Lake Union in Seattle. And it’s true — despite a handful of displays, phones, and other gadgets, the only things really capable of doing any kind of AI-type work are the phones each of us has set on the table. Yet we are always hearing about how AI is so accessible now, so flexible, so ubiquitous. And in many cases even those devices that can aren’t employing machine learning techniques themselves, but rather sending data off to the cloud where it can be done more efficiently. Because the processes that make up “AI” are often resource-intensive, sucking up CPU time and battery power. That’s the problem Xnor aimed to solve, or at least mitigate, when it . Its breakthrough was to make the execution of deep learning models on edge devices so efficient that a $5 Raspberry Pi Zero could perform state-of-the-art object recognition nearly as well as a supercomputer. The team achieved that, and Xnor’s hyper-efficient ML models are now integrated into a variety of devices and businesses. As a follow-up, the team set their sights higher — or lower, depending on your perspective. Answering his own question on the dearth of AI-enabled devices, Farhadi pointed to the battery pack in the demo gadget they made to show off the Pi Zero platform and explained: “This thing right here. Power.” Power was the bottleneck they overcame to get AI onto CPU- and power-limited devices like phones and the Pi Zero. So the team came up with a crazy goal: Why not make an AI platform that doesn’t need a battery at all? Less than a year later, . That thing right there performs a serious computer vision task in real time: It can detect in a fraction of a second whether and where a person, or car, or bird, or whatever, is in its field of view, and relay that information wirelessly. And it does this using the kind of power usually associated with solar-powered calculators. The device Farhadi and hardware engineering head Saman Naderiparizi showed me is very simple — and necessarily so. A tiny camera with a 320×240 resolution, an FPGA loaded with the object recognition model, a bit of memory to handle the image and camera software, and a small solar cell. A very simple wireless setup lets it send and receive data at a very modest rate. “This thing has no power. It’s a two dollar computer with an uber-crappy camera, and it can run state of the art object recognition,” enthused Farhadi, clearly more than pleased with what the Xnor team has created. For reference, this video from the company’s debut shows the kind of work it’s doing inside: As long as the cell is in any kind of significant light, it will power the image processor and object recognition algorithm. It needs about a hundred millivolts coming in to work, though at lower levels it could just snap images less often. It can run on that current alone, but of course it’s impractical to not have some kind of energy storage; to that end this demo device has a supercapacitor that stores enough energy to keep it going all night, or just when its light source is obscured. As a demonstration of its efficiency, let’s say you did decide to equip it with, say, a watch battery. Naderiparizi said it could probably run on that at one frame per second for more than 30 years.
Not a product
Of course the breakthrough isn’t really that there’s now a solar-powered smart camera. That could be useful, sure, but it’s not really what’s worth crowing about here. 
It’s the fact that a sophisticated deep learning model can run on a computer that costs pennies and uses less power than your phone does when it’s asleep. “This isn’t a product,” Farhadi said of the tiny hardware platform. “It’s an enabler.” The energy necessary for performing inference processes such as facial recognition, natural language processing, and so on puts hard limits on what can be done with them. A smart light bulb that turns on when you ask it to isn’t really a smart light bulb. It’s a board in a light bulb enclosure that relays your voice to a hub and probably a datacenter somewhere, which analyzes what you say and returns a result, turning the light on. That’s not only convoluted, but it introduces latency and a whole spectrum of places where the process could break or be attacked. And meanwhile it requires a constant source of power or a battery! On the other hand, imagine a camera you stick into a house plant’s pot, or stick to a wall, or set on top of the bookcase, or anything. This camera requires no more power than some light shining on it; it can recognize voice commands and analyze imagery without touching the cloud at all; it can’t really be hacked because it barely has an input at all; and its components cost maybe $10. Only one of these things can be truly ubiquitous. Only the latter can scale to billions of devices without requiring immense investment in infrastructure. And honestly, the latter sounds like a better bet for a ton of applications where there’s a question of privacy or latency. Would you rather have a baby monitor that streams its images to a cloud server where it’s monitored for movement? Or a baby monitor that absent an internet connection can still tell you if the kid is up and about? If they both work pretty well, the latter seems like the obvious choice. And that’s the case for numerous consumer applications. Amazingly, the power cost of the platform isn’t anywhere near bottoming out. The FPGA used to do the computing on this demo unit isn’t particularly efficient for the processing power it provides. If they had a custom chip baked, they could get another order of magnitude or two out of it, lowering the work cost for inference to the level of microjoules. The size is more limited by the optics of the camera and the size of the antenna, which must have certain dimensions to transmit and receive radio signals. And again, this isn’t about selling a million of these particular little widgets. As Xnor has done already with its clients, the platform and software that runs on it can be customized for individual projects or hardware. One even wanted a model to run on MIPS — so now it does. By drastically lowering the power and space required to run a self-contained inference engine, entirely new product categories can be created. Will they be creepy? Probably. But at least they won’t have to phone home.
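Xnor’s name comes from XNOR-Net, a binarized-network technique its founders published, in which weights and activations are reduced to +1/-1 so that a dot product collapses into an XNOR followed by a popcount. That is the general kind of arithmetic shortcut that makes inference cheap enough for a $2 FPGA; the sketch below illustrates the identity in plain NumPy and is a toy demonstration, not Xnor’s production code.

```python
import numpy as np


def binarize(v: np.ndarray) -> np.ndarray:
    """Map real-valued weights/activations to +1/-1, stored as 0/1 bits."""
    return (v >= 0).astype(np.uint8)


def xnor_dot(a_bits: np.ndarray, b_bits: np.ndarray) -> int:
    """Dot product of two +/-1 vectors via bit matching: on real hardware this
    is XNOR plus popcount; here np.sum of equal bits stands in for popcount."""
    n = a_bits.size
    matches = int(np.sum(a_bits == b_bits))
    return 2 * matches - n  # +1 for each matching sign, -1 for each mismatch


# Sanity check against the ordinary floating-point dot product of sign vectors
rng = np.random.default_rng(1)
a, b = rng.normal(size=256), rng.normal(size=256)
a_pm, b_pm = np.where(a >= 0, 1, -1), np.where(b >= 0, 1, -1)
assert xnor_dot(binarize(a), binarize(b)) == int(a_pm @ b_pm)
print(xnor_dot(binarize(a), binarize(b)))
```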
Xnor shrinks AI to fit on a solar-powered chip, opening up big frontiers on the edge

9:50am, 13th February, 2019
Xnor.ai machine learning engineer Hessam Bagherinezhad, hardware engineer Saman Naderiparizi and co-founder Ali Farhadi show off a chip that uses solar-powered AI. (GeekWire Photo / Alan Boyle) It was a big deal two and a half years ago when researchers the size of a candy bar — and now it’s an even bigger deal for Xnor.ai to re-engineer its artificial intelligence software to fit onto a solar-powered computer chip. “To us, this is as big as when somebody invented a light bulb,” Xnor.ai’s co-founder, Ali Farhadi, said at the company’s Seattle headquarters. Like the candy-bar-sized, Raspberry Pi-powered contraption, the camera-equipped chip flashes a signal when it sees a person standing in front of it. But the chip itself isn’t the point. The point is that Xnor.ai has figured out how to blend stand-alone, solar-powered hardware and edge-based AI to turn its vision of “artificial intelligence at your fingertips” into a reality. “This is a key technology milestone, not a product,” Farhadi explained. Shrinking the hardware and power requirements for AI software should expand the range of potential applications greatly, Farhadi said. “Our homes can be way smarter than they are today. Why? Because now we can have many of these devices deployed in our houses,” he said. “It doesn’t need to be a camera. We picked a camera because we wanted to show that the most expensive algorithms can run on this device. It might be audio. … It might be a way smarter smoke detector.” Outside the home, Farhadi can imagine putting AI chips on stoplights, to detect how busy an intersection is at a given time and direct the traffic flow accordingly. AI chips could be tethered to balloons or scattered in forests, to monitor wildlife or serve as an early warning system for wildfires. Xnor’s solar-powered AI chip is light enough to be lofted into the air on a balloon for aerial monitoring. In this image, the chip is highlighted by the lamp in the background. (Xnor. ai Photo) Sophie Lebrecht, Xnor.ai’s senior vice president of strategy and operations, said the chips might even be cheap enough, and smart enough, to drop into a wildfire or disaster zone and sense where there are people who need to be rescued. “That way, you’re only deploying resources in unsafe areas if you really have to,” she said. The key to the technology is reducing the required power so that it can be supplied by a solar cell that’s no bigger than a cocktail cracker. That required innovations in software as well as hardware. “We had to basically redo a lot of things,” machine learning engineer Hessam Bagherinezhad said. Xnor.ai’s head of hardware engineering, Saman Naderiparizi, worked with his colleagues to figure out a way to fit the software onto an FPGA chip that costs a mere $2, and he says it’s possible to drive the cost down to less than a dollar by going to ASIC chips. It only takes on the order of milliwatts of power to run the chip and its mini-camera, he told GeekWire. “With technology this low power, a device running on only a coin-cell battery could be always on, detecting things every second, running for 32 years,” Naderiparizi said in a news release. That means there’d be no need to connect AI chips to a power source, replace their batteries or recharge them. And the chips would be capable of running AI algorithms on standalone devices, rather than having to communicate constantly with giant data servers via the cloud. If the devices need to pass along bits of data, they could . 
That edge-computing approach is likely to reduce the strain of what could turn out to be billions of AI-enabled devices. “The carbon footprint of data centers running all of those algorithms is a key issue,” Farhadi said. “And with the way AI is progressing, it will be a disastrous issue pretty soon, if we don’t think about how we’re going to power our AI algorithms. Data centers, cloud-based solutions for edge-use cases are not actually efficient ways, but other than efficiency, it’s harming our planet in a dangerous way.” Farhadi argues that cloud-based AI can’t scale as easily as edge-based AI. “Imagine when I put a camera or sensor at every intersection of this city. There is no cloud that is going to handle all that bandwidth,” he said. “Even if there were, back-of-the-envelope calculations would show that my business will go bankrupt before it sees the light of day.” The edge approach also addresses what many might see as the biggest bugaboo about having billions of AI bugs out in the world: data privacy. “I don’t want to put a camera in my daughter’s bedroom if I know that the picture’s going to end up in the cloud,” Farhadi said. Xnor.ai was , or AI2, only a couple of years ago, and the venture is with millions of dollars of financial backing from Madrona Venture Group, AI2 and other investors. Farhadi has faith that the technology Xnor.ai is currently calling “solar-powered AI” will unlock still more commercial frontiers, but he can’t predict whether the first applications will pop up in the home, on the street or off the beaten track. “It will open up so many different things, the exact same thing when the light bulb was invented: No one knew what to do with it,” he said. “The technology’s out there, and we’ll figure out the exact products.”
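Farhadi’s back-of-the-envelope argument about cameras at every intersection is easy to reproduce. The figures below (camera count, per-camera bitrate, event message size) are illustrative assumptions, not numbers from the article; the point is the orders-of-magnitude gap between streaming raw video to the cloud and sending only on-device detection events.

```python
# Back-of-the-envelope comparison of streaming every intersection camera to
# the cloud versus sending only on-device detection events. All inputs are
# illustrative assumptions, not figures from the article.
cameras = 10_000                 # assumed intersections in a large city
stream_mbps = 2.0                # assumed per-camera video bitrate (Mbit/s)
event_bytes_per_min = 200        # assumed size of one detection message

# cameras * Mbit/s -> MB/s (divide by 8), * seconds per day, MB -> TB
stream_tb_per_day = cameras * stream_mbps / 8 * 86_400 / 1e6
# cameras * bytes/min * minutes per day, bytes -> GB
event_gb_per_day = cameras * event_bytes_per_min * 1_440 / 1e9

print(f"cloud streaming: ~{stream_tb_per_day:,.0f} TB/day uploaded")
print(f"edge events only: ~{event_gb_per_day:.1f} GB/day uploaded")
```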
Univ. of Washington spinout takes on $5.2B scheduling problem with custom AI analytics

12:00pm, 9th May, 2018
(Bigstock Photo) Hospitals have to solve a thousand logistical challenges every day, but perhaps none are more difficult than operating room schedules. Surgeries can be difficult to predict — in fact, less than half of surgeries in the U.S. start and end on time. That can create chaos for patients and doctors, and costs hospitals $5.2 billion every year, according to University of Washington spinout Perimatics. The startup, which develops a variety of technologies for hospitals, is taking aim at the operating room problem with a new AI technology that uses data on patients and surgeons to more accurately predict how long each surgery will take. The startup recently deployed the technology at a large academic medical institution in Seattle. So far, it has cut the number of surgeries that run over their scheduled time by 20 percent, a result that could save a hospital $1 million a year in staff overtime alone. Perimatics co-founder and CEO Kalyani Velagapudi. (Perimatics Photo) The startup is still studying how its technology affects underage, or the number of surgeries that end before the predicted time, and other elements including patient and employee satisfaction. Perimatics’ algorithm begins by looking at a patient’s data and seeking out information that will affect how long the surgery takes, like the patient’s prior surgeries and their age. Kalyani Velagapudi, Perimatics co-founder and CEO, told GeekWire that the surgeons themselves also have a big impact on how long a surgery takes. Each surgeon approaches an operation differently and will bring in various factors that affect the length of the operation. “That was a surprise,” said Bala Nair, Perimatics’ chief solutions architect and co-founder. “We had to build machine learning models customized for each surgeon.” The algorithm also takes into account the staff that will work on the procedure, like anesthesiologists. It can also suggest last-minute scheduling adjustments when operating rooms are needed for emergency procedures. Bala Nair, Perimatics’ co-founder and chief solutions architect. (Perimatics Photo) The end goal is to help hospitals cut down the $5.2 billion a year that results from overage and underage in surgeries. In addition to staff overtime costs, operating rooms cost an estimated to run, so any variation from the set schedule quickly becomes costly. That’s not to mention factors like patient and employee dissatisfaction, which is also a common side effect of scheduling challenges. Although this is the first time the technology has been deployed in a hospital system, Nair said it is easily scalable. Now that Perimatics has worked out which factors impact surgery length, the basic framework can be applied to almost any hospital, he said. Velagapudi said the startup is continuing work on its other AI technologies, including its Smart Anaesthesia Manager. That program, invented by Nair, analyzes a patient’s health metrics in real-time during surgery and helps doctors make decisions that have a big impact on a patient’s health when they are recovering. She also said the company is working on new solutions for post-surgery problems and surgical supplies. “It is quite different from the data science that is being done on the market today because it is real time,” Velagapudi said of the startup’s work. Perimatics spun out from the University of Washington last year and currently employs seven people at its headquarters in Bellevue, Wash. It is also a partner of , the tech giant’s startup assistance program.
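The article says Perimatics had to build machine learning models customized for each surgeon, fed by patient factors such as age and prior surgeries plus the staff on the case. The sketch below shows that per-surgeon pattern in a generic form; the feature set, the gradient-boosting model and the record layout are placeholders, not Perimatics’ actual system.

```python
from __future__ import annotations

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# One regressor per surgeon, as the article describes, trained on patient and
# staff features. Feature names and model choice are illustrative assumptions.
FEATURES = ["age", "prior_surgeries", "anesthesiologist_years"]


def train_surgeon_models(records: list[dict]) -> dict:
    """records: [{'surgeon': str, 'age': int, 'prior_surgeries': int,
                  'anesthesiologist_years': int, 'duration_min': float}, ...]"""
    models = {}
    for surgeon in {r["surgeon"] for r in records}:
        rows = [r for r in records if r["surgeon"] == surgeon]
        X = np.array([[r[f] for f in FEATURES] for r in rows])
        y = np.array([r["duration_min"] for r in rows])
        models[surgeon] = GradientBoostingRegressor().fit(X, y)
    return models


def predict_duration(models: dict, surgeon: str, age: int,
                     prior_surgeries: int, anesthesiologist_years: int) -> float:
    """Predicted case length in minutes, used to build the OR schedule."""
    X = np.array([[age, prior_surgeries, anesthesiologist_years]])
    return float(models[surgeon].predict(X)[0])
```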