Apple is beefing up its artificial intelligence team, in an apparent attempt to make iPhones clever enough to know what their users want before they do.
The company has launched a major hiring push for experts in machine learning, a branch of computing that aims to build systems that learn the way humans do. The push likely supports Apple's effort to make iPhones smarter and better able to anticipate what users are looking for, capabilities being built into its personal assistant, Siri.
A recent announcement reads: “UBIC, Inc. (UBIC) (TSE:2158) (“UBIC” or “the Company”), a leading provider of AI-based, big data analysis services, and BI.Garage, Inc. (“BI.Garage”) announced today that they have formed a business alliance to provide a social networking site (SNS) marketing support service that combines BI.Garage’s expertise in social media marketing and UBIC’s artificial intelligence (AI) technology. This will be the first AI-based SNS marketing support service in Japan. Initially, BI.Garage will equip Tweetmanager, a Twitter account operation support tool that it has developed and provided since 2009, with a new function to work in collaboration with UBIC’s Virtual Data Scientist (VDS) and related technologies. VDS technology allows Tweetmanager to quickly analyze the massive volume of text posted on Twitter. Companies using Tweetmanager will be able to analyze their marketing strategies by reviewing and incorporating the information provided by VDS related to Twitter users. Companies with access to this user information will be able to make more effective business actions.”
Apple has posted a lot of job openings dealing with artificial intelligence lately.
Having more AI-focused employees would likely have the biggest impact on Siri going forward, but it is an odd investment given that Apple has taken a hard stance against collecting users’ data.
Machine learning, the technique at the core of most modern AI, requires massive amounts of data to be analyzed by a program before it can even begin to operate in the way we’d expect it to.
At the International Conference on Intelligent Robots and Systems in September, members of the Singapore-MIT Alliance for Research and Technology (SMART) and their colleagues will describe an experiment conducted over six days at a large public garden in Singapore, in which self-driving golf carts ferried 500 tourists around winding paths trafficked by pedestrians, bicyclists, and the occasional monitor lizard.
The experiments also tested an online booking system that enabled visitors to schedule pickups and drop-offs at any of 10 distinct stations scattered around the garden, automatically routing and redeploying the vehicles to accommodate all the requests.
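The dispatch problem such a booking system solves can be sketched as a greedy assignment: send the nearest idle vehicle to each pending pickup request. The station coordinates, vehicle names, and assignment rule below are invented for illustration; the article does not describe how SMART's actual routing works.

```python
import math

# Hypothetical station layout: 10 stations placed on a unit circle.
stations = {i: (math.cos(i), math.sin(i)) for i in range(10)}

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_pickups(vehicle_positions, requests):
    """Greedily assign the nearest free vehicle to each pickup station.

    vehicle_positions: dict of vehicle id -> (x, y)
    requests: list of station ids awaiting pickup, in arrival order
    Returns a dict of vehicle id -> station id.
    """
    free = dict(vehicle_positions)
    plan = {}
    for station in requests:
        if not free:
            break  # more requests than vehicles; remaining riders wait
        nearest = min(free, key=lambda v: dist(free[v], stations[station]))
        plan[nearest] = station
        del free[nearest]
    return plan

# cart_a sits exactly at station 0; cart_b is closest to station 3.
vehicles = {"cart_a": (1.0, 0.0), "cart_b": (-1.0, 0.0)}
print(assign_pickups(vehicles, [0, 3]))  # {'cart_a': 0, 'cart_b': 3}
```

A production dispatcher would also account for drop-off routing and rebalancing idle vehicles, but the greedy core is the same idea at smaller scale.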
In the B2B supply chain, technology is introducing significant improvements to the way businesses deal with each other. SaaS and Big Data have both played a role in making the supply chain more efficient. Hitachi’s latest innovation pushes B2B supply chain technology even further, and it is likely to impact more B2B business processes than supply chain management alone.
On Friday (Sept. 4), Japan-based technology conglomerate Hitachi revealed that it has developed supply chain artificial intelligence (AI) to help businesses manage their work orders and gain deeper insight into demand fluctuations.
A who’s-who grouping of the world’s most prominent minds has signed onto a letter urging robotics researchers to be extremely cautious in developing artificial intelligence (AI) technology, warning that an inevitable military AI arms race could (and likely will) unfold, leading to “a third revolution in warfare.”
Apple co-founder Steve Wozniak, Tesla’s Elon Musk, scientist Stephen Hawking and more than 1,000 others signed the letter, presented at the recent International Joint Conference on Artificial Intelligence in Argentina, and they clearly see the writing on the wall: If AI technologies continue to develop unabated, they say, autonomous weapons systems that operate without human input could eventually be used for atrocities such as mass genocide and ethnic cleansing campaigns.
When we talk about artificial intelligence (AI) – which we have done a lot recently, including my outline of liability and regulation issues on The Conversation – what do we actually mean?
AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.
This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation automated weapons systems (AWSs) are treated under the laws of war.
Artificial intelligence will characterize the next major wave of IT innovation, according to Pat Gelsinger, CEO of VMware.
It was the concluding point of his Tuesday keynote at VMworld, which focused on the changing nature of the enterprise. As business becomes increasingly digital, mobile, and cloud-based, the way we run businesses ought to change as well, he said.
Those changes are happening already, but in the background, he sees AI starting to make its presence known. It’s true AI has been “on the way” for decades. But Gelsinger believes its time is finally imminent, and he’s not alone; he recounted a recent conversation with Stanford University President John Hennessy, who believes AI is the most important upcoming wave of technology.
Some of the paintings you see above were painted by some of the most renowned artists in human history. The others were made by an artificial intelligence.
Robotic brains have a ways to go before they match the masters in terms of pure creativity, but it seems they’ve gotten quite good at mimicking and remixing what they see. In a study published late last week, researchers from the University of Tübingen in Germany described an artificial neural network capable of lifting the “style” of one image and applying it to another, which is why the waterfront houses above look as though they were painted by Picasso, van Gogh, or Munch.
As you might expect, the math is quite complex, but the basic idea is pretty simple. As the researchers explain, computers are getting very good at image recognition and reproduction. The neural network basically does two jobs, then: One layer analyzes the content of an image, while another analyzes its texture, or style. These functions can also be split to work across two images.
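The content/style split above can be sketched as two loss terms: a content loss that compares feature maps directly, and a style loss that compares feature correlations (Gram matrices), which capture texture independent of layout. The toy arrays below stand in for real convolutional-network activations; this is a sketch of the loss functions under that assumption, not a full style-transfer pipeline.

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a feature map; this captures 'style'.

    features: array of shape (channels, height * width).
    """
    return features @ features.T

def content_loss(generated, content):
    """Mean squared difference between raw feature maps: preserves 'what' is in the image."""
    return np.mean((generated - content) ** 2)

def style_loss(generated, style):
    """Squared difference between Gram matrices, normalized by map size: preserves texture."""
    channels, positions = generated.shape
    g_gen, g_style = gram_matrix(generated), gram_matrix(style)
    return np.sum((g_gen - g_style) ** 2) / (4 * channels**2 * positions**2)

# Toy feature maps: 3 channels over 4 spatial positions.
rng = np.random.default_rng(0)
content_feats = rng.normal(size=(3, 4))
style_feats = rng.normal(size=(3, 4))
generated = content_feats.copy()  # start the 'generated' image from the content features

print(content_loss(generated, content_feats))  # 0.0 -- content matches exactly
print(style_loss(generated, style_feats) > 0)  # True -- style still differs
```

In the actual method, an optimizer adjusts the generated image's pixels to lower a weighted sum of the two losses, trading content fidelity against stylistic match.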
IBM is testing a new way to alleviate Beijing’s choking air pollution with the help of artificial intelligence. The Chinese capital, like many other cities across the country, is surrounded by factories, many fueled by coal, that emit harmful particulates. But pollution levels can vary depending on factors such as industrial activity, traffic congestion, and weather conditions.
The IBM researchers are testing a computer system capable of learning to predict the severity of air pollution in different parts of the city several days in advance by combining large quantities of data from several different models—an extremely complex computational challenge. The system could eventually offer specific recommendations on how to reduce pollution to an acceptable level—for example, by closing certain factories or temporarily restricting the number of drivers on the road. A comparable system is also being developed for a city in the Hebei province, a badly affected area in the north of the country.
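One simple way to combine forecasts from several models, as the IBM system reportedly does at far greater scale, is a weighted average in which each model's weight reflects its recent accuracy. The model names, forecast values, and error figures below are invented placeholders, not details from the Beijing project.

```python
def blend_forecasts(forecasts, recent_errors):
    """Blend pollution forecasts (e.g. PM2.5 in micrograms/m^3) from several models.

    Each model is weighted by the inverse of its recent mean absolute error,
    so more accurate models contribute more to the combined prediction.
    """
    weights = {m: 1.0 / recent_errors[m] for m in forecasts}
    total = sum(weights.values())
    return sum(forecasts[m] * weights[m] for m in forecasts) / total

# Hypothetical per-model forecasts and their recent mean absolute errors.
forecasts = {"weather_model": 180.0, "traffic_model": 140.0, "industry_model": 200.0}
recent_errors = {"weather_model": 20.0, "traffic_model": 40.0, "industry_model": 25.0}

print(round(blend_forecasts(forecasts, recent_errors), 1))  # 178.3
```

The blended value leans toward the more trusted weather and industry models; a real system would also update the error estimates continuously as observations arrive.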