iPhone 8 Launch in September, But Availability Will Be Limited: KGI Securities

HIGHLIGHTS

  • KGI claims the iPhone 8 will come in three colours only
  • The smartphone will go into mass production in mid-September
  • The iPhone 8 photos suggest no Touch ID integration

iPhone 8, the tenth anniversary Apple smartphone, has been in the news for nearly a year, and it seems the handset is finally about to become a reality. Its production issues have been reported several times now, but a fresh report from one of the most reliable sources of Apple leaks claims that the iPhone 8 will launch in September, though availability will be limited at first. Furthermore, live photos of the iPhone 8 have appeared online, along with images of its casing, offering further evidence of its design.

iPhone 8 launch in September

KGI Securities’ Ming-Chi Kuo is back with his sporadic predictions on Apple’s plans this fall, and he claims that the iPhone 8 has managed to emerge through its tribulations and will be unveiled on time in September. The smartphone will undergo a product verification test in August and enter mass production in mid-September, but will be available in limited quantities at first. In fact, the report states that the supply chain will produce only between 2 million and 4 million units this quarter. KGI’s predictions were obtained by Apple Insider.

However, production is expected to ramp up quickly to between 45 million and 50 million units this year. Apple will produce 35 million to 38 million iPhone 7s units, and 18 million to 20 million iPhone 7s Plus handsets this year, according to Kuo.

The report also claims that all iPhone variants will support fast charging through a separately sold accessory – a Lightning-to-USB-C cable and wall adapter. The iPhone 8 will also be made available in three colour variants – Black, Silver, and Gold.

iPhone 8 leaked images

Separately, the iPhone 8 was leaked in photos on Techtastic.com. The black colour variant was shown from the front and back, and it sports a vertical dual camera setup, a bezel-less display, and no fingerprint scanner at the front or back. While there is no image showing off the sides, it seems Apple may have forgone Touch ID altogether to give Face ID prominence. The casings of the iPhone 7s, iPhone 7s Plus, and iPhone 8 have also been leaked alongside, and they show a cutout at the back of the iPhone 8. Though the cutout suggests a fingerprint scanner could be embedded there, it could also just be for the Apple logo, and nothing else.

Photo Credit: TechTastic

Expect more clarity on the fate of Touch ID as we near launch, and more detailed leaks emerge. For now, the iPhone 8 is expected to sport an almost bezel-less OLED screen, wireless charging, no home button, no 3.5mm audio jack, an Apple A11 chip, and AR features.

[“Source-gadgets.ndtv”]

Jio Phone’s 3-Year Lock-In Period, Monthly Plan Cost, May Be Obstacles in Adoption: JP Morgan

HIGHLIGHTS

  • JioPhone can be bought with a three-year lock-in period
  • JP Morgan report feels that this could be an obstacle
  • Users in India prefer multi-SIM devices and the prepaid churn is >5%

At its Annual General Meeting this year, Reliance Jio announced its first ‘effectively free’ 4G phone – the JioPhone. The catch is that to get a JioPhone, users will have to pay a deposit of Rs. 1,500 upfront, refundable after three years. Now, weeks after the launch, a fresh report questions the JioPhone’s feasibility in the growing Indian market – specifically, the growing smartphone market. Jio is asking for a three-year lock-in period for the JioPhone, only after which the deposit will be refunded, and this, along with the pricing of the monthly plan rentals, may be an obstacle to the JioPhone’s adoption, claims a recent JP Morgan report.

The JP Morgan report expresses apprehension about the JioPhone’s prospects. It claims that the Indian audience is more accustomed to multi-SIM devices, and that prepaid monthly churn is more than 5 percent. Keeping this in mind, it would be unrealistic to expect users to lock themselves in for three years with a single-SIM device that works only with a Jio network and SIM.

Furthermore, some usage plans are priced at Rs. 153 for a month. A user who can afford Rs. 153 a month is normally considered well enough off to already be a smartphone user, or to become one before the three-year term is over, according to the report. The report notes the 40 percent smartphone penetration of the largest telecom operators, such as Airtel and Idea, as another factor highlighting the trend of smartphone migration in the market. This, coupled with the lack of a massive ecosystem like Android (and apps like WhatsApp), would limit the perceived smartness of the JioPhone, which is being marketed as “India ka Smartphone”. Finally, the report notes that Reliance Jio may be catering to the feature-phone segment instead of the entry-level smartphone segment it pitched at the AGM.

“Is JioPhone largely an up-trade proposition for the low-end ARPU sub segments or can it also do enough to persuade consumers to downtrade as well? – In our view, the answer to this would really depend on whether consumers are content enough with the experience/performance of the JioPhone and find compromises in downtrade acceptable. Then, there is also the matter of finding the trade-off between following a relatively walled ecosystem (which no doubt helps propagate RJio’s family of proprietary apps) vs. an open ecosystem,” JP Morgan further countered in its report, obtained by Telecom Talk.

Also, telcos like Airtel, Vodafone, and Idea will come up with their own 4G VoLTE feature phone offerings to counter the JioPhone, but this still gives Mukesh Ambani a head-start of 9 to 12 months, as these telcos will only launch counter products after they manage to get a threshold VoLTE network up and running.

“We think the stakes could actually be higher for the OEMs (that are not engaged with RJio), who would not want to left out of this new phone segment as their existing bread-and-butter feature phone business comes under threat. So, incumbents may be actually able to exercise leverage over the OEMs in the coming months,” JP Morgan said in its report.

In any case, the Jio Phone booking process will start from August 24, both online and offline, and here’s all that you need to know if you’re looking to get one for yourself.

[“Source-gadgets.ndtv”]

ALIS in Blunderland: Lockheed says F-35 Block 3F software to be done by year’s end

F-35 software development will be finished by the end of this year, Lockheed Martin has said – which contradicts the view of various American government audit agencies.

“We are well positioned to complete air vehicle full 3F and mission systems software development by the end of 2017,” said exec veep Jeff Babione, in a statement announcing that the fleet has passed its 100,000th flying hour.

The supersonic fighter jet’s onboard software will eventually be running Block 3F, the final revision as referred to by Babione. Ground operations will be supported by the Autonomous Logistics Information System (ALIS) suite, which was stuck on version 2.0.1.3 at the beginning of this year thanks to delays in rolling out a newer version.

Meanwhile, the Pentagon’s director of operational test and evaluation told a US Congress committee earlier this year that the aircraft won’t be ready before 2019, mentioning 158 “Category 1” software flaws that could cause death, severe injury or illness unless fixed.

The USAF hit back at these reports, announcing in May that Block 3F would be ready by “September or October” this year. Block 4 is said to be already in development, in spite of the delays to Block 3F. New software “drops” will be rolled out about every two years, with Block 4 scheduled for the beginning of the 2020s.

United Press International also took a closer look at the Block 3F aircraft software release, citing a US Government Accountability Office report from last year. “Using historical data of delays over the course of the program, GAO estimates that the Block 3F software package might need until May 2018 to finish SDD, as opposed to the program office’s projection of Oct 2017 to the end of the year,” reported the site.

The Times reported earlier this month that the cost of UK F-35Bs had increased amid various problems with the jets and the aircraft carriers that will host them in UK service, although informed defence sources immediately cast doubt on the accuracy of the paper’s reporting. Among other things it linked poor “broadband” speeds available to the warships’ crews with the effectiveness of air-to-ship communications links, as well as describing the NATO Link 16 encrypted communications standard as an “unsecured wavelength”.

The paper also pegged the cost of each aircraft at around £198m, though post-publication commentary appeared to suggest that the newspaper had included the through-life costs of each aircraft (i.e. spare parts, software upgrades, fuel, etc). The majority of public cost estimates are done on an “upfront” basis and do not include projected spares packages and the like.

Delays and cost overruns have been a frequent feature of the F-35 saga. Given the sheer quantity of software in not only the aircraft but its ALIS logistics suite too, it should be no surprise that costs have spiralled and unforeseen delays reared their heads. After all, since when did any Big Government IT Project run on time and within budget? ®

[“Source-theregister”]

Google Street View Can Now Be Used to Explore the International Space Station

HIGHLIGHTS

  • The imagery is already available in Google Street View
  • The astronaut captured imagery over six months
  • Space constraints posed difficulties in capturing imagery

Google Street View, a feature of the search giant’s Maps service, has given users a much better idea of locations around the world than satellite imagery ever could. Moving forward leaps and bounds, Google Street View now allows users to see the International Space Station (ISS) as closely as they can see the streets of London from their homes.

The search giant has launched a new option for Google Street View that allows users to see the 15 connected modules of the ISS. Thomas Pesquet, an astronaut at the European Space Agency (ESA), spent six months on the International Space Station (ISS) as a flight engineer to capture the Street View imagery, Google said in its blog post.

“The mission was the first time Street View imagery was captured beyond planet Earth, and the first time annotations – helpful little notes that pop up as you explore the ISS – have been added to the imagery,” Google said. While this is certainly an interesting option for users, Pesquet explained that due to the constraints of living and working in space, Google’s usual methods of capturing Street View couldn’t be used.

“Instead, the Street View team worked with NASA at the Johnson Space Center in Houston, Texas and Marshall Space Flight Center in Huntsville, Alabama to design a gravity-free method of collecting the imagery using DSLR cameras and equipment already on the ISS,” he said.

Pesquet then sent the still photos he captured back to Earth, where they were stitched together to create panoramic 360-degree imagery of the ISS.

As pointed out in a report by TechCrunch, one of SpaceX’s Dragon vehicles was docked at the ISS while the imagery was being captured. This means that users can also see how cargo is supplied to the ISS. You can check out the new imagery from space in the Street View section on the company’s website.

[“Source-gadgets.ndtv”]

3 Reasons Your Website Will Never Be Finished

Say, I’ve got some news for you: Your company’s website will never be finished. You will never sit back, breathe a sigh of relief, and say, “Finally! We’ve got this thing wrapped up; now we can move onto other things.”

That is, this will never happen if you’re doing all you should with your website. And this adds up to some good news because if you’re constantly updating your site, you’ll develop an advantage over your competitors who aren’t.

 

Here are three reasons you should never stop working on your website:

1. Web design trends are evolving. Compare websites designed within the past few months with those designed a few years ago, and you’ll notice some differences. Web design trends can sometimes be mere fads, but often they are driven by changes in technology. Two modern trends in web design are flat design and responsive design.

Gradients, drop shadows, bevels and elements designed to resemble real objects have no place in flat web design. Proponents of flat design eschew the fancy in favor of simplicity, clean lines, bold colors and a focus on content and usability. Flat design also means cleaner code, faster-loading pages (good for SEO) and greater adaptability, which factors into the next trend.

Responsive web design means that a site responds to the various sizes of screens that people use to view websites. Today someone might look at a site on a desktop monitor, a tablet or a smartphone, which come in different sizes.

Years ago, most companies had both a separate mobile site that would be displayed for users on a tablet or smartphone and a full website that would appear for desktop users. But this strategy was less than ideal because those websites were geared toward only two screen sizes. Responsive websites take into account all screen sizes and adjust to provide an optimal experience for every user. This leads to greater website-visitor retention. As a result, companies today are ditching the dedicated desktop and mobile sites in favor of a single, responsive website. (FlatInspire.com displays websites that are both flat and responsive.)

 

2. Consumer preferences are changing. Customers expect something different from your website now than did two, five or 10 years ago. When high-speed internet became widely available, users started to anticipate rich content, such as high-resolution photography and HD videos. As desktop screens grew larger and wider, consumers looked for sites that would take advantage of the additional real estate.

This year the number of smartphone users worldwide is expected to surpass 1.75 billion, prompting a move toward long, vertical websites that scroll.

Today’s consumers don’t want to waste time. Everyone is busy and wants to get to the point as efficiently as possible. Many companies have understood this to mean that content should be clear and concise.

While brevity may be the soul of wit, consumers don’t always want webpages short on content. What they want is high-quality content that delivers real value. Sometimes the best way to do this is through long-form content. Basecamp performed an experiment with long-form content on its home page and found signups for its project management software rose 37.5 percent.

Design agency Teehan+Lax embraces long-form content in its portfolio section, in a post about working with client Krush. The segment delivers value by helping potential clients understand what the process of working with the company would be like. Long-form content is also good for SEO.

 

3. Search engine optimization rules. The premise of SEO is that if a company sells widgets and its site shows up No. 1 in a Google search for the term “widgets,” then viewers will be drawn to that corporate site. But it may not be the only company desiring to market widgets. Therefore, the company’s task is to convince Google that when someone searches for widgets, any user arriving at the company’s website will find it especially appropriate for the search term. If users aren’t happy with Google’s search results, that’s bad for Google.

It used to be that a lot of SEO firms would trick Google into sending traffic to their clients’ websites. But Google employs thousands of people with doctorates to systematically filter out search engine spam. Google’s search algorithm updates like Panda, Penguin, and Hummingbird have forced websites to provide real value to visitors or see their rankings in the search engines fall and traffic dry up. Although some aspects of SEO can be done just once (such as ensuring that you have a credible web-hosting firm and solid code on your website so that it loads quickly), here are some ongoing activities that companies can engage in to get good search-engine rankings and drive traffic to their site:

  • Attract inbound links from high quality, relevant websites.
  • Create content that people enjoy reading and want to share.
  • Update the corporate website frequently with high-quality content.
  • Keep up with design trends to make the website fresh and attractive.

Creating new content and attracting links can mean updating a blog and press section, or developing valuable informational resource sections like tips, FAQs, or articles. It also helps for the company to become an expert in its field and engage in online PR. And yes, even guest blog posting is still a viable tactic for link building, as long as it’s of high quality.

 

 

[Source:- Entrepreneur]

Marshmallow found to be running only on 18.7% of Android devices

With Android 7.0 Nougat preparing for a big arrival in the coming weeks, how is last year’s Android 6.0 Marshmallow faring? Not so well, according to the Android distribution figures posted by Google, which show that the late-2015 release of Android is found on only 18.7% of all Android devices out there. This is quite disappointing, especially since it’s been nearly 12 months since the OS was publicly released.

While we don’t have actual numbers, the percentages give us a good idea of the fragmentation in the Android platform. Unsurprisingly, Android 7.0 hasn’t even made it to the list, as it is running on less than 0.1% of devices. This is not much of a surprise, as the Nougat update is yet to be released commercially and is only available in beta builds.

Android 4.4 KitKat is still on 27.7% of devices, while Android 5.0 and 5.1 collectively make up 35% of the Android population. It is hoped that devices currently running Lollipop will eventually transition to Marshmallow and improve those figures somewhat. But going by the current trend, we don’t see Nougat gaining much market share even six months down the line.

While Google is quite prompt with its updates, manufacturers are often lagging behind. This is perhaps the primary cause for so many Android devices being left behind and we hope Google comes up with a solution to this sooner rather than later.

 

 

[Source:- Techrader]

Why you should be using pattern libraries

Have you heard of pattern libraries, style guides, component libraries, design patterns or UI toolkits? Don’t worry if you’re confused or don’t know the differences. Here’s a secret—most people in the design industry are also a little confused.

With all these terms flying around it can quickly become overwhelming. But rest assured it’s actually much less complex than you might first think.

All these different terms can be grouped into two different categories:

1) STYLE GUIDES

These are brand guidelines for a website. They contain the logo, colours and typography. A style guide takes all the relevant parts of the brand guidelines and places them together.

2) PATTERN LIBRARIES / COMPONENT LIBRARIES / UI TOOLKITS

All these terms refer to the same thing.

They are a collection of reusable components that make up a website. Pattern libraries (as I’ll refer to them from now on) are a way to represent everything that makes up a website. This includes the layout, structure and everything that is contained within them.

On an eCommerce website this would include a product item, a review, star rating, quantity, navigation, tables and buttons, to name a few. Each of these is called a component.

So, a pattern library is a collection of components that make up the website.

PATTERN LIBRARIES VS STYLE GUIDES

Websites require both a style guide and pattern library. They’ll often live together which might be where a lot of the confusion comes from.

Style guides apply branding while pattern libraries apply layout and structure. For example, the style guide for Levi’s would dictate that the website should use red with a heavy font, but the pattern library would dictate that a product listing item should contain an image, title and price.

 

 

 

[Source:- webdesignerdepot]

Nintendo Switch Joy-Con Charging Grip Will Be Sold Separately

It’s come to light that the Joy-Con Grip, included in the $300 Nintendo Switch launch bundle, doesn’t let you charge your controllers while you play. Instead, you will need a separately sold Joy-Con Charging Grip, which costs $30 (about Rs. 2,000).

The standard Joy-Con Grip is merely a matte-finish holder for your pair of Joy-Con, giving you the more traditional feel of a controller. Contrary to what you might naturally assume, it’s not the same thing as the one sold separately.

The hint lies in the name – the one sold standalone is called a Joy-Con Charging Grip, which includes a USB slot unlike the grip sold with the Switch. That means the premium translucent $30 addition allows you to charge your pair of Joy-Con while your Switch device is docked into its station.

Not bundling this with the Switch seems like a bit of a cheap move on Nintendo’s part; the company will most likely justify the omission as keeping the base cost low. For what it’s worth, the Joy-Con has a battery life of around 20 hours, though that will get shorter with use.

It does take three and a half hours to charge them back up though, so if you don’t get the Charging Grip, you won’t be able to play in docked mode during that time.

This also ties into the bigger issue of accessory cost that’s already piling up with the Switch – a Switch Pro Controller, more ergonomic and longer-lasting, costs $70 (about Rs. 4,800). An additional pair of Joy-Con, should you need one, will also run you $80 (about Rs. 5,400).

Nintendo is also not bundling a single game with the console, in contrast to how the Wii came with Wii Sports. Two launch titles that carry a price tag – The Legend of Zelda: Breath of the Wild, and 1, 2, Switch – come in at $60 and $50 respectively (about Rs. 4,000 and Rs. 3,400).

 

 

[Source:- gadgets360]

Google DeepMind: What is it, how does it work and should you be scared?

Today concluded the series of five ‘Go’ matches between AlphaGo, an AI system built by DeepMind, and South Korean champion Lee Sedol. AlphaGo won the series 4-1.

‘Go’ is a strategy-led board game in which two players aim to gather and surround the most territory on the board. The game is said to require a certain level of intuition and to be considerably more complex than chess. AlphaGo won the first three games; Sedol claimed the fourth, but could not turn the series around.

What is DeepMind?

Google DeepMind is an artificial intelligence division within Google that was created after Google bought University College London spinout, DeepMind, for a reported £400 million in January 2014. 

The division, which employs around 140 researchers at its lab in a new building at Kings Cross, London, is on a mission to solve general intelligence and make machines capable of learning things for themselves. It plans to do this by creating a set of powerful general-purpose learning algorithms that can be combined to make an AI system or “agent”. 

DeepMind cofounder Mustafa Suleyman explains

These are systems that learn automatically. They’re not pre-programmed, they’re not handcrafted features. We try to provide as large a set of raw information to our algorithms as possible so that the systems themselves can learn the very best representations in order to use those for action or classification or predictions.

The systems we design are inherently general. This means that the very same system should be able to operate across a wide range of tasks.

That’s why we’ve started as we have with the Atari games. We could have done lots of really interesting problems in narrow domains had we spent time specifically hacking our tools to fit the real world problems – that could have been very, very valuable. 

Instead we’ve taken the principled approach of starting on tools that are inherently general.

AI has largely been about pre-programming tools for specific tasks: in these kinds of systems, the intelligence of the system lies mostly in the smart human who programmed all of the intelligence into it. Subsequently these systems are of course rigid and brittle; they don’t really handle novelty very well or adapt to new settings, and are fundamentally very limited as a result.

We characterise AGI [artificial general intelligence] as systems and tools which are flexible and adaptive and that learn. 

We use the reinforcement learning architecture which is largely a design approach to characterise the way we develop our systems. This begins with an agent which has a goal or policy that governs the way it interacts with some environment. This environment could be a small physics domain, it could be a trading environment, it could be a real world robotics environment or it could be an Atari environment. The agent takes actions in this environment and gets feedback from the environment in the form of observations, and it uses these observations to update its policy of behaviour or its model of the world.
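The agent–environment loop Suleyman describes can be sketched in a few lines of Python. This is only an illustrative toy, not DeepMind's code: the five-action environment, the reward rule, and all parameter values here are invented for the example.

```python
import random

class ToyEnvironment:
    """Hypothetical environment: one of five actions yields a reward of 1."""
    def __init__(self, rewarding_action=3, n_actions=5):
        self.rewarding_action = rewarding_action
        self.n_actions = n_actions

    def step(self, action):
        # Return an observation (here just the action taken) and a reward.
        reward = 1.0 if action == self.rewarding_action else 0.0
        return action, reward

class ToyAgent:
    """Keeps a value estimate per action; mostly exploits, sometimes explores."""
    def __init__(self, n_actions=5, lr=0.5, explore_prob=0.2):
        self.values = [0.0] * n_actions
        self.lr = lr
        self.explore_prob = explore_prob

    def act(self):
        if random.random() < self.explore_prob:
            return random.randrange(len(self.values))      # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action, reward):
        # Use the environment's feedback to nudge the value estimate.
        self.values[action] += self.lr * (reward - self.values[action])

random.seed(0)
env, agent = ToyEnvironment(), ToyAgent()
for _ in range(200):
    action = agent.act()
    _, reward = env.step(action)
    agent.update(action, reward)

best = max(range(env.n_actions), key=lambda a: agent.values[a])
print(best)   # the action the agent has learned to prefer
```

The same loop shape scales up: swap the value table for a neural network and the five abstract actions for game controls, and you have the outline of the agent-environment setup described in the talk.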

How does it work? 

The technology behind DeepMind is complex to say the least but that didn’t stop Suleyman from trying to convey some of the fundamental deep learning principles that underpin it. The audience – a mixture of software engineers, AI specialists, startups, investors and media – seemed to follow. 

Suleyman explains 

You’ve probably heard quite a bit about deep learning. I’m going to give you a very quick high-level overview because this is really important to get intuition for how these systems work and what they basically do. 

These are hierarchical based networks initially conceived back in the 80s but recently resuscitated by a bunch of really smart guys from Toronto and New York.

The basic intuition is that at one end we take the raw pixel data or the raw sensory stream data of things we would like to classify or recognise. 

This seems to be a very effective way of learning to find structure in very large data sets. Right at the very output we’re able to impose on the network some requirement to produce some set of labels or classifications that we recognise and find useful as humans. 
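As a toy version of that raw-pixels-to-labels idea, here is a single-layer network trained by gradient descent. The four-pixel "images" and the labelling rule are entirely made up for illustration, and a one-layer model is far simpler than the deep hierarchical networks being described; it only shows the direction of travel from raw sensory input to a useful label.

```python
import numpy as np

# Hypothetical data: 4-pixel "images", labelled 1 when the left half is brighter.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, :2].sum(axis=1) > X[:, 2:].sum(axis=1)).astype(float)

w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # network output: P(label = 1)
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient of the cross-entropy loss
    b -= lr * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
print(accuracy)   # training accuracy on the toy data
```

Nothing here was hand-labelled feature by feature: the weights end up positive on the left pixels and negative on the right ones purely from the data, which is the intuition behind learning representations rather than programming them.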

How is DeepMind being tested? 

DeepMind found a suitably quirky way to test what its team of roughly 140 people have been busy building. 

The intelligence of DeepMind’s systems was put through its paces by an arcade gaming platform that dates back to the 1970s.

Suleyman demoed DeepMind playing one of them during his talk – Space Invaders. In his demo he illustrated how a DeepMind agent learns to play the game with each go it takes.

Suleyman explains 

We use the Atari test bed to develop and test and train all of our systems…or at least we have done so far. 

There is somewhere on the magnitude of 100 different Atari games from the 70s and 80s. 

The agents only get the raw pixel inputs and the score so this is something like 30,000 inputs per frame. They’re wired up to the action buttons but they’re not really told what the action buttons do so the agent has to discover what these new tools of the real world actually mean and how they can utilise value for the agent. 

The goal that we give them is very simply to maximise score; it gets a 1 or a 0 when the score comes in, just as a human would. 

Everything is learned completely from scratch – there’s absolutely zero pre-programmed knowledge so we don’t tell the agent these are Space Invaders or this is how you shoot. It’s really learnt from the raw pixel inputs. 

For every set of inputs the agent is trying to assess which action is optimal given that set of inputs and it’s doing that repeatedly over time in order to optimise some longer term goal, which in Atari’s sense, is to optimise score. This is one agent with one set of parameters that plays all of the different games.
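Optimising a longer-term goal rather than the immediate reward is what distinguishes this from simple trial and error, and the textbook way to sketch it is tabular Q-learning. The five-state corridor below is a made-up stand-in for a game, with reward only in the final state, so the agent has to learn that early moves matter even though they pay nothing immediately.

```python
import random

# Hypothetical corridor world: states 0..4, reward only on reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)                 # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3  # learning rate, discount, exploration rate

random.seed(1)
for _ in range(300):               # episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        # Discounted bootstrapping: value flows backwards from the goal.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)   # greedy action in each non-terminal state
```

In the Atari setting the state is the raw pixel input and the table is replaced by a deep network, but the update being applied has the same shape: one agent, one set of parameters, maximising long-term score.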

Live space invaders demo

Before training, an agent playing Space Invaders struggles to hide behind the orange obstacles and fires fairly randomly. It seems to get killed all of the time and it doesn’t really know what to do in the environment. 

After training, the agent learns to control the robot and barely loses any bullets. It aims for the space invaders that are right at the top because it finds those the most rewarding. It barely gets hit; it hides behind the obstacles; it can make really good predictive shots like the one on the mothership that came in at the top there. 

As those of you know who have played this game, it sort of speeds up towards the end and so the agent has to do a little bit more planning and predicting than it had done previously so as you can see there’s a really good predictive shot right at the end there. 

100 games vs 500 games

The agent doesn’t really know what the paddle does after 100 games, it sort of randomly moves it from one side to the other. Occasionally it accidentally hits the ball back and finds that to be a rewarding action. It learns that it should repeat that action in order to get reward. 

After about 300 games it’s pretty good and it basically doesn’t really miss. 

But then after about 500 games, really quite unexpectedly to our coders, the agent learns that the optimal strategy is to tunnel up the sides and then send them all around the back to get maximum score with minimum effort – this was obviously very impressive to us. 

We’ve now achieved human performance in 49/57 games that we’ve tested on and this work was recently rewarded with a front cover of Nature for our paper that we submitted so we were very proud of that. 

How is it being used across Google? 

Google didn’t buy DeepMind for nothing. Indeed, it’s using certain DeepMind algorithms to make many of its best-known products and services smarter than they were previously. 

Suleyman explains

Our deep learning tool has now been deployed in many environments, particularly across Google in many of our production systems.

In image recognition, it was famously used in 2012 to achieve very accurate recognition on around a million images, with about a 16 percent error rate. Very shortly after that the error rate was reduced dramatically to about 6 percent, and today we’re at about 5.5 percent. This is very much comparable with the human level of ability and it’s now deployed in Google+ Image Search and elsewhere in Image Search across the company.

As you can see on Google Image Search on G+, you’re now able to type a word into the search box and it will recall images from your photographs that you’ve never actually hand labelled yourself. 

We’ve also used it for text transcription. We use it to identify text on shopfronts and maybe alert people to a discount that’s available in a particular shop or what the menu says in a given restaurant. We do that with an extremely high level of accuracy today. It’s being used in Local Search and elsewhere across the company. 

We also use the same core system across Google for speech recognition. It trains in less than five days. In 2012 it delivered a 30 percent reduction in error rate against the existing old-school system. This was the biggest single improvement in speech recognition in 20 years, again using the same very general deep learning system across all of these.

Across Google we use what we call Tool AI, or deep learning networks, for fraud detection, spam detection, handwriting recognition, image search, speech recognition, Street View detection and translation.

Sixty handcrafted rule-based systems have now been replaced with deep learning-based networks. This gives you a sense of the generality, flexibility and adaptiveness of the advances that have been made across the field, and why Google was interested in DeepMind.

Google DeepMind, AlphaGo and Go: the discussion

Techworld editors discuss the implications of AlphaGo beating all the humans.

The number of scientists and world-famous entrepreneurs speaking out on the potential dangers of AI is increasing week by week, with renowned physicist Stephen Hawking and PayPal billionaire Elon Musk being two of the most outspoken critics of AI.

The pair, along with several others including Bill Gates and Skype cofounder Jaan Tallinn, believe that machines will soon become more intelligent than humans, just as they do in the recent Hollywood blockbuster Ex Machina.

Despite this, Google is keen to develop its AI algorithms as much as possible in order to improve its offerings and boost its profits. 

Suleyman tried to put people’s minds at ease and explain the logic behind all the hype. 

Suleyman explains

Over the last 18 months or so, AI breakthroughs have, I think, created a sense of anxiety or in some cases hype around the potential long term direction of the field. 

This of course is not least induced by Elon [Musk], who recently tweeted that we need to be super careful with AI because it’s “potentially more dangerous than nukes”. That’s obviously backed up by various publications, including Nick Bostrom’s – all culminating in this sense that AI has the potential to end all of humankind.

If you didn’t really pay attention to the field and all you did was read, as I think the vast majority of people do, descriptions of the kind of work that we do on the web then you could be forgiven for believing that AI is actually about this. Whether it’s Terminator coming to blow us up or societies of AIs or mad scientists looking to create quite perverted women robots.

This narrative has somehow managed to dominate the entire landscape, which I think we find really quite remarkable. 

It’s true that AI has, in some sense, really arrived. This isn’t just another AI summer. These are very concrete production breakthroughs that really do make a big difference. But it’s also sad how quickly we adapt to this new reality. We rarely take time to acknowledge the magic and the potential of these advances and the kind of good that they can bring. In some sense, the narrative has shifted from “isn’t it terrible that AI has been such a failure” to “isn’t it terrible that AI has been such a success”.

Just to address directly this question of existential risk. Our perspective on this is that it’s become a real distraction from the core ethics and safety issues and that it’s completely overshadowed the debate. 

The way we think about AI is that it’ll be a hugely powerful tool that we control and direct whose capabilities we limit, just as we do with any other tool that we have in the world around us, whether they’re washing machines or tractors. 

[Source: Techworld]

Windows 10 Will Not Be Free For Enterprise?

Microsoft’s free upgrade offer for Windows 10 expired on July 29. Windows 10 Home currently costs $120, Windows 10 Pro costs $200 and upgrading from Windows 10 Home to Pro costs $100. Enterprise customers pay by volume pricing.

Back in January of 2015, Microsoft’s Terry Myerson revealed that Windows 10 would be a free upgrade for those using Windows 7, Windows 8/8.1 and Windows Phone 8.1. These customers would be able to get the operating system for free within one year of its release.

Myerson also branded Windows 10 as a “Windows-As-A-Service” platform given that it will be kept current for its supported lifetime. That’s a big step for Microsoft and good news for small businesses. Unfortunately, Windows 7 Enterprise and Windows 8/8.1 Enterprise are not part of the free Windows 10 upgrade program.

“Windows 7 Enterprise and Windows 8/8.1 Enterprise are not included in the terms of free Windows 10 Upgrade offer we announced last week, given active Software Assurance customers will continue to have rights to upgrade to Windows 10 enterprise offerings outside of this offer – while also benefiting from the full flexibility to deploy Windows 10 using their existing management infrastructure,” wrote Microsoft’s Jim Alkove in a recent Windows blog update.

Alkove went on to explain that with Windows 10, the company is taking a new approach for enterprise customers, with two options called “Current Branch for Business” and “Long-Term Servicing Branch.” With the latter, businesses receive enterprise-level support by way of critical updates and the latest security fixes, but their Windows 10 devices will not automatically receive new features for the five years following the platform’s release.

As for “Current Branch for Business,” businesses can install new features after they’re released to consumers and tested for compatibility on Windows 10 devices. Meanwhile, said devices will receive security updates as they’re released by Microsoft.

“This gives IT departments time to start validating updates in their environments the day changes are shipped broadly to consumers, or in some cases earlier, if they have users enrolled in the Windows Insider Program,” Alkove said. “By the time Current branch for Business machines are updated, the changes will have been validated by millions of Insiders, consumers and customers’ internal test processes for several months, allowing updates to be deployed with this increased assurance of validation.”

Alkove added that businesses can choose to get their Windows 10 updates through WSUS, which will allow IT departments to manually distribute features, or automatically through Windows Update.

Towards the end of the post, Alkove said that the first Windows 10 Long Term Servicing branch will be released alongside the consumer version of Windows 10 later this summer. He also expects to see businesses take a “mixed” approach in how Windows 10 devices will be kept current. Different users and systems will likely have different update schedules.

Businesses are encouraged to join the Windows Insider Program, download the latest Windows 10 Technical Preview, and provide feedback.

[Source: Tomsitpro]