3 Reasons Your Website Will Never Be Finished


Say, I’ve got some news for you: Your company’s website will never be finished. You will never sit back, breathe a sigh of relief, and say, “Finally! We’ve got this thing wrapped up; now we can move onto other things.”

That is, this will never happen if you’re doing all you should with your website. And this adds up to some good news because if you’re constantly updating your site, you’ll develop an advantage over your competitors who aren’t.


Here are three reasons you should never stop working on your website:

1. Web design trends are evolving. Compare websites designed within the past few months with those designed a few years ago, and you’ll notice some differences. Web design trends can sometimes be mere fads, but often they are driven by changes in technology. Two modern trends in web design are flat design and responsive design.

Gradients, drop shadows, bevels and elements designed to resemble real objects have no place in flat web design. Proponents of flat design eschew the fancy in favor of simplicity, clean lines, bold colors and a focus on content and usability. Flat design also means cleaner code, faster-loading pages (good for SEO) and greater adaptability, which factors into the next trend.

Responsive web design means that a site adapts to the various screen sizes people use to view it. Today someone might look at the same site on a desktop monitor, a tablet or a smartphone, each with a different display size.

Years ago, most companies maintained a separate mobile site, displayed for users on a tablet or smartphone, alongside a full website that appeared for desktop users. But this strategy was less than ideal because those websites were geared toward only two screen sizes. Responsive websites take into account all screen sizes and adjust to provide an optimal experience for every user, which leads to greater website-visitor retention. As a result, companies today are ditching dedicated desktop and mobile sites in favor of a single, responsive website. (FlatInspire.com displays websites that are both flat and responsive.)
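Responsive layouts are implemented with CSS media queries, but the underlying idea is just a set of breakpoints that map screen width to a layout. The sketch below illustrates that idea in Python; the breakpoint values and layout names are hypothetical examples, not a standard.

```python
# Illustrative sketch of responsive breakpoints: pick a layout based on
# the viewport width. Real sites express this in CSS media queries;
# the values here are made-up examples.

BREAKPOINTS = [
    (600, "single-column"),   # phones: stack everything vertically
    (1024, "two-column"),     # tablets: narrower layout, collapsed sidebar
]

def layout_for(width_px: int) -> str:
    """Return the layout whose breakpoint the viewport width falls under."""
    for max_width, layout in BREAKPOINTS:
        if width_px <= max_width:
            return layout
    return "full-desktop"     # anything wider gets the full layout

print(layout_for(375))   # single-column (a typical phone)
print(layout_for(1440))  # full-desktop (a desktop monitor)
```

The same page serves every device; only the layout rule chosen at render time differs, which is exactly what a media query does in the browser.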


2. Consumer preferences are changing. Customers expect something different from your website now than they did two, five or 10 years ago. When high-speed internet became widely available, users started to anticipate rich content, such as high-resolution photography and HD videos. As desktop screens grew larger and wider, consumers looked for sites that would take advantage of the additional real estate.

This year the number of smartphone users worldwide is expected to surpass 1.75 billion, prompting a move toward long, vertical websites that scroll.

Today’s consumers don’t want to waste time. Everyone is busy and wants to get to the point as efficiently as possible. Many companies have understood this to mean that content should be clear and concise.

While brevity may be the soul of wit, consumers don’t always want webpages short on content. What they want is high-quality content that delivers real value. Sometimes the best way to do this is through long-form content. Basecamp performed an experiment with long-form content on its home page and found signups for its project management software rose 37.5 percent.

Design agency Teehan+Lax embraces long-form content in its portfolio section, in a post about working with client Krush. The piece delivers value by helping potential clients understand what the process of working with the company would be like. Long-form content is also good for SEO.


3. Search engine optimization rules. The premise of SEO is that if a company sells widgets and its site shows up No. 1 in a Google search for the term “widgets,” then searchers will be drawn to that site. But it is probably not the only company marketing widgets. The company’s task, therefore, is to convince Google that anyone searching for widgets will find its website especially relevant to that search term. If users aren’t happy with Google’s search results, that’s bad for Google.

It used to be that a lot of SEO firms would trick Google into sending traffic to their clients’ websites. But Google employs thousands of people with doctorates to systematically filter out search engine spam. Google’s search algorithm updates like Panda, Penguin, and Hummingbird have forced websites to provide real value to visitors or see their rankings in the search engines fall and traffic dry up. Although some aspects of SEO can be done just once (such as ensuring that you have a credible web-hosting firm and solid code on your website so that it loads quickly), here are some ongoing activities that companies can engage in to get good search-engine rankings and drive traffic to their site:

  • Attract inbound links from high-quality, relevant websites.
  • Create content that people enjoy reading and want to share.
  • Update the corporate website frequently with high-quality content.
  • Keep up with design trends to make the website fresh and attractive.

Creating new content and attracting links can mean updating a blog and press section, or developing valuable informational resources such as tips, FAQs or articles. It also helps for the company to become an expert in its field and engage in online PR. And yes, even guest blog posting is still a viable tactic for link building, as long as it’s of high quality.



[Source:- Entrepreneur]

Marshmallow found to be running only on 18.7% of Android devices

With Android 7.0 Nougat preparing a big arrival in the coming weeks, how is last year’s Android 6.0 Marshmallow faring? Not so well, according to the Android distribution figures posted by Google. The late-2015 release of Android is found on only 18.7% of all Android devices out there. This is quite disappointing, especially since it’s been nearly 12 months since the OS was publicly released.

While we don’t have actual numbers, the percentages give us a good idea of the fragmentation in the Android platform. Unsurprisingly, Android 7.0 hasn’t even made it to the list as it is running on less than 0.1% of the devices. This is not much of a surprise as the Nougat update is yet to go official commercially and is only available in beta builds.

Android 4.4 KitKat is still on 27.7% of devices, while Android 5.0 and 5.1 collectively make up 35% of the Android population. It is hoped that devices currently running Lollipop will eventually transition to Marshmallow and lift those figures somewhat. But going by the current trend, we don’t expect Nougat to capture much market share even six months down the line.
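The version shares quoted above can be tallied as a quick sanity check. The figures are the reported shares for this snapshot; everything not listed (Nougat, KitKat’s predecessors, etc.) is grouped into an assumed “other” bucket.

```python
# Tallying the Android distribution figures quoted in the article.

shares = {
    "KitKat (4.4)": 27.7,
    "Lollipop (5.0/5.1)": 35.0,
    "Marshmallow (6.0)": 18.7,
}
# Remaining share: everything older or newer than the three listed versions.
shares["other"] = round(100 - sum(shares.values()), 1)

# If every Lollipop device eventually moved to Marshmallow, its share
# would roughly become:
potential = round(shares["Marshmallow (6.0)"] + shares["Lollipop (5.0/5.1)"], 1)

print(potential)        # 53.7
print(shares["other"])  # 18.6
```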

While Google is quite prompt with its updates, manufacturers often lag behind in delivering them. This is perhaps the primary reason so many Android devices are stuck on older versions, and we hope Google comes up with a solution sooner rather than later.



[Source:- Techradar]

Why you should be using pattern libraries


Have you heard of pattern libraries, style guides, component libraries, design patterns or UI toolkits? Don’t worry if you’re confused or don’t know the differences. Here’s a secret—most people in the design industry are also a little confused.

With all these terms flying around it can quickly become overwhelming. But rest assured it’s actually much less complex than you might first think.

All these different terms can be grouped into two categories: style guides and pattern libraries.


A style guide covers the brand guidelines for a website: the logo, colours and typography. It takes all the relevant parts of the brand guidelines and places them together.


Pattern libraries, component libraries, design patterns and UI toolkits all refer to the same thing.

They are a collection of reusable components that make up a website. Pattern libraries (as I’ll refer to them from now) are a way to represent everything that makes up a website. This includes the layout, structure and everything that is contained within them.

On an eCommerce website this would include a product item, a review, star rating, quantity, navigation, tables and buttons, to name a few. Each of these is called a component.

So, a pattern library is a collection of components that make up the website.
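The idea can be sketched in a few lines of code: a pattern library is just a named, reusable collection of components from which pages are assembled. The component names below are hypothetical examples drawn from the eCommerce case above.

```python
# Minimal sketch of a pattern library: a registry of reusable UI components.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    description: str

@dataclass
class PatternLibrary:
    components: dict = field(default_factory=dict)

    def register(self, component: Component) -> None:
        self.components[component.name] = component

    def get(self, name: str) -> Component:
        return self.components[name]

library = PatternLibrary()
library.register(Component("star-rating", "1-5 star product rating"))
library.register(Component("product-item", "image, title and price"))

# Every page that needs a rating reuses the same shared component:
print(library.get("star-rating").description)
```

Because every page pulls from the same registry, a change to one component propagates everywhere it is used, which is the practical payoff of a pattern library.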


Websites require both a style guide and pattern library. They’ll often live together which might be where a lot of the confusion comes from.

Style guides apply branding while pattern libraries apply layout and structure. For example, the style guide for Levis would dictate the website should use red with a heavy font but the pattern library would dictate a product listing item should contain an image, title and price.




[Source:- webdesignerdepot]

Nintendo Switch Joy-Con Charging Grip Will Be Sold Separately


It’s come to light that the Joy-Con Grip, included in the $300 Nintendo Switch launch bundle, doesn’t let you charge your controllers while you play. Instead, you will need a separately sold Joy-Con Charging Grip, which costs $30 (about Rs. 2,000).

The standard Joy-Con Grip is merely a matte-finish holder for your pair of Joy-Con, which gives you the more traditional feel of a controller. Despite what you’d naturally assume, it’s not the same thing as the one sold separately.

The hint lies in the name – the one sold standalone is called a Joy-Con Charging Grip, which includes a USB slot unlike the grip sold with the Switch. That means the premium translucent $30 addition allows you to charge your pair of Joy-Con while your Switch device is docked into its station.

Not bundling this with the Switch seems like a bit of a cheap move on Nintendo’s part, though the company will most likely justify the omission as keeping the base cost low. For what it’s worth, the Joy-Con has a battery life of around 20 hours, though that will get shorter with use.

They do take three and a half hours to recharge, though, so if you don’t get the Charging Grip, you won’t be able to play in docked mode during that time.

This also ties into the bigger issue of accessory cost that’s already piling up with the Switch – a Switch Pro Controller, more ergonomic and longer-lasting, costs $70 (about Rs. 4,800). An additional pair of Joy-Con, should you need one, will also run you $80 (about Rs. 5,400).

Nintendo is also not bundling a single game with the console, in contrast to how the Wii came with Wii Sports. Two launch titles – The Legend of Zelda: Breath of the Wild, and 1, 2, Switch – come in at $60 and $50 respectively (about Rs. 4,000 and Rs. 3,400).



[Source:- gadgets360]

Google DeepMind: What is it, how does it work and should you be scared?


Today concludes the five-game ‘Go’ series between AlphaGo, an AI system built by DeepMind, and South Korean champion Lee Sedol. AlphaGo won the series 4-1.

‘Go’ is a strategy-led board game in which two players aim to capture and surround the most territory on the board. The game is said to require a certain level of intuition and to be considerably more complex than chess. AlphaGo won the first three games; Sedol took the fourth, but could not turn the series around.

What is DeepMind?

Google DeepMind is an artificial intelligence division within Google that was created after Google bought University College London spinout, DeepMind, for a reported £400 million in January 2014. 

The division, which employs around 140 researchers at its lab in a new building at Kings Cross, London, is on a mission to solve general intelligence and make machines capable of learning things for themselves. It plans to do this by creating a set of powerful general-purpose learning algorithms that can be combined to make an AI system or “agent”. 

DeepMind cofounder Mustafa Suleyman explains

These are systems that learn automatically. They’re not pre-programmed; they don’t rely on handcrafted features. We try to provide as large a set of raw information to our algorithms as possible so that the systems themselves can learn the very best representations, in order to use those for action or classification or prediction.

The systems we design are inherently general. This means that the very same system should be able to operate across a wide range of tasks.

That’s why we’ve started as we have with the Atari games. We could have tackled lots of really interesting problems in narrow domains had we spent time specifically hacking our tools to fit real-world problems – and that could have been very, very valuable.

Instead we’ve taken the principled approach of starting with tools that are inherently general.

AI has largely been about pre-programming tools for specific tasks. In these kinds of systems, the intelligence lies mostly in the smart human who programmed it all in; as a result they are rigid and brittle, don’t handle novelty well or adapt to new settings, and are fundamentally very limited.

We characterise AGI [artificial general intelligence] as systems and tools which are flexible and adaptive and that learn. 

We use the reinforcement learning architecture, which is largely a design approach to characterise the way we develop our systems. This begins with an agent which has a goal or policy that governs the way it interacts with some environment. This environment could be a small physics domain, it could be a trading environment, it could be a real-world robotics environment or it could be an Atari environment. The agent takes actions in this environment and gets feedback from the environment in the form of observations, and it uses these observations to update its policy of behaviour or its model of the world.
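The loop Suleyman describes – act, observe, receive reward, update the policy – can be sketched with a toy environment. Everything below is an illustrative stand-in, not DeepMind’s system: a two-action environment where one action always pays, and an agent that nudges its value estimates toward observed rewards.

```python
# Toy sketch of the agent/environment reinforcement-learning loop.

class ToyEnvironment:
    """Trivial stand-in environment: action 1 is always rewarded, action 0 never."""
    def step(self, action):
        reward = 1 if action == 1 else 0
        observation = reward            # the agent's observation is just its outcome
        return observation, reward

class Agent:
    def __init__(self):
        self.value = {0: 0.0, 1: 0.0}   # estimated value of each action

    def act(self, t):
        if t % 10 == 0:                  # periodically force exploration
            return (t // 10) % 2
        return max(self.value, key=self.value.get)   # otherwise act greedily

    def update(self, action, reward):
        # Nudge the value estimate toward the observed reward.
        self.value[action] += 0.1 * (reward - self.value[action])

env, agent = ToyEnvironment(), Agent()
for t in range(200):
    a = agent.act(t)
    _, r = env.step(a)
    agent.update(a, r)

print(agent.value[1] > agent.value[0])  # True: the agent learned action 1 pays
```

The structure – goal, policy, feedback, update – is the same whether the environment is this two-action toy or an Atari game; only the scale of the observations and the sophistication of the policy change.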

How does it work? 

The technology behind DeepMind is complex to say the least but that didn’t stop Suleyman from trying to convey some of the fundamental deep learning principles that underpin it. The audience – a mixture of software engineers, AI specialists, startups, investors and media – seemed to follow. 

Suleyman explains 

You’ve probably heard quite a bit about deep learning. I’m going to give you a very quick high-level overview because this is really important to get intuition for how these systems work and what they basically do. 

These are hierarchically structured networks, initially conceived back in the 80s but recently resuscitated by a bunch of really smart guys from Toronto and New York.

The basic intuition is that at one end we take the raw pixel data or the raw sensory stream data of things we would like to classify or recognise. 

This seems to be a very effective way of learning to find structure in very large data sets. Right at the very output we’re able to impose on the network some requirement to produce some set of labels or classifications that we recognise and find useful as humans. 
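The “raw data in at one end, labels out at the other” picture can be made concrete with a toy forward pass: input values flow through a hidden layer to class probabilities. The weights below are made up purely for illustration; real networks learn them from data, and real inputs are thousands of pixels, not three numbers.

```python
# Toy two-layer network: "raw pixels" -> hidden representation -> class probabilities.

import math

def relu(x):
    # Zero out negative activations.
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One fully connected layer: output_j = sum_i x_i * w[j][i] + b[j]
    return [sum(xi * wji for xi, wji in zip(x, wj)) + bj
            for wj, bj in zip(weights, bias)]

def softmax(x):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(v) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

pixels = [0.2, 0.8, 0.5]                                  # stand-in "raw sensory" input
hidden = relu(dense(pixels, [[1.0, -1.0, 0.5], [0.3, 0.3, 0.3]], [0.0, 0.1]))
probs  = softmax(dense(hidden, [[1.0, 0.0], [-1.0, 1.0]], [0.0, 0.0]))

print(round(sum(probs), 6))   # 1.0 – a valid probability distribution over labels
```

Stacking many such layers is what makes the representation hierarchical: early layers find simple structure, later layers combine it, and the final layer is constrained to produce the labels humans find useful.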

How is DeepMind being tested? 

DeepMind found a suitably quirky way to test what its team of roughly 140 people has been busy building.

DeepMind’s systems were put through their paces by an arcade gaming platform that dates back to the 1970s.

Suleyman demoed DeepMind playing one of them during his talk – Space Invaders. In his demo he illustrated how a DeepMind agent learns to play the game with each go it takes.

Suleyman explains 

We use the Atari test bed to develop and test and train all of our systems…or at least we have done so far. 

There are somewhere in the region of 100 different Atari games from the 70s and 80s.

The agents only get the raw pixel inputs and the score, so this is something like 30,000 inputs per frame. They’re wired up to the action buttons, but they’re not really told what the action buttons do, so the agent has to discover what these controls actually do and how they can deliver value for the agent.

The goal that we give them is very simply to maximise score; the agent gets a 1 or a 0 when the score comes in, just as a human would.

Everything is learned completely from scratch – there’s absolutely zero pre-programmed knowledge so we don’t tell the agent these are Space Invaders or this is how you shoot. It’s really learnt from the raw pixel inputs. 

For every set of inputs the agent is trying to assess which action is optimal given that set of inputs and it’s doing that repeatedly over time in order to optimise some longer term goal, which in Atari’s sense, is to optimise score. This is one agent with one set of parameters that plays all of the different games.

Live Space Invaders demo

Before training, an agent playing Space Invaders struggles to hide behind the orange obstacles and fires fairly randomly. It gets killed all the time and doesn’t really know what to do in the environment.

After training, the agent learns to control the ship and barely wastes a shot. It aims for the space invaders right at the top because it finds those the most rewarding. It barely gets hit; it hides behind the obstacles; and it can make really good predictive shots, like the one on the mothership that came in at the top there.

As those of you know who have played this game, it sort of speeds up towards the end and so the agent has to do a little bit more planning and predicting than it had done previously so as you can see there’s a really good predictive shot right at the end there. 

100 games vs 500 games

After 100 games, the agent doesn’t really know what the paddle does; it sort of randomly moves it from one side to the other. Occasionally it accidentally hits the ball back and finds that to be a rewarding action, and it learns that it should repeat that action in order to get reward.

After about 300 games it’s pretty good and it basically doesn’t really miss. 

But then after about 500 games, really quite unexpectedly to our coders, the agent learns that the optimal strategy is to tunnel up the sides and then send the ball around the back to get maximum score with minimum effort – this was obviously very impressive to us.

We’ve now achieved human performance in 49 of the 57 games we’ve tested on, and this work was recently recognised with the front cover of Nature for the paper we submitted – we were very proud of that.

How is it being used across Google? 

Google didn’t buy DeepMind for nothing. Indeed, it’s using certain DeepMind algorithms to make many of its best-known products and services smarter than they were previously. 

Suleyman explains

Our deep learning tool has now been deployed in many environments, particularly across Google in many of our production systems.

In image recognition, it was famously used in 2012 to achieve very accurate recognition on around a million images, with about a 16 percent error rate. Very shortly after that the error rate was reduced dramatically to about 6 percent, and today we’re at about 5.5 percent. This is very much comparable with the human level of ability, and it’s now deployed in Google+ Image Search and elsewhere in Image Search across the company.

As you can see on Google Image Search on G+, you’re now able to type a word into the search box and it will recall images from your photographs that you’ve never actually hand labelled yourself. 

We’ve also used it for text transcription. We use it to identify text on shopfronts and maybe alert people to a discount available in a particular shop, or to what the menu says in a given restaurant. We do that with an extremely high level of accuracy today. It’s being used in Local Search and elsewhere across the company.

We also use the same core system across Google for speech recognition. It trains in less than five days. In 2012 it delivered a 30 percent reduction in error rate against the existing old-school system – the biggest single improvement in speech recognition in 20 years, again using the same very general deep learning system across all of these.

Across Google we use what we call Tool AI or Deep Learning Networks for fraud detection, spam detection, hand writing recognition, image search, speech recognition, Street View detection, translation. 

Sixty handcrafted rule-based systems have now been replaced with deep learning based networks. This gives you a sense of the kind of generality, flexibility and adaptiveness of the kind of advances that have been made across the field and why Google was interested in DeepMind.

Google DeepMind, AlphaGo and Go: the discussion

Techworld editors discuss the implications of AlphaGo beating all the humans.

The number of scientists and world-famous entrepreneurs speaking out on the potential dangers of AI is increasing week by week, with renowned physicist Stephen Hawking and PayPal billionaire Elon Musk being two of the most outspoken critics of AI.

The pair, along with several others including Bill Gates and Skype cofounder Jaan Tallinn, believe that machines will soon become more intelligent than humans, just as they do in the recent Hollywood blockbuster Ex Machina.

Despite this, Google is keen to develop its AI algorithms as much as possible in order to improve its offerings and boost its profits. 

Suleyman tried to put people’s minds at ease and explain the logic behind all the hype. 

Suleyman explains

Over the last 18 months or so, AI breakthroughs have, I think, created a sense of anxiety or in some cases hype around the potential long term direction of the field. 

This of course is not least induced by Elon [Musk], who recently tweeted that we need to be super careful with AI because it’s “potentially more dangerous than nukes”, and that’s backed up by various publications including Nick Bostrom’s – all culminating in this sense that AI has the potential to end all humankind.

If you didn’t really pay attention to the field and all you did was read, as I think the vast majority of people do, descriptions on the web of the kind of work that we do, then you could be forgiven for believing that AI is actually about this – whether it’s the Terminator coming to blow us up, or societies of AIs, or mad scientists looking to create quite perverted robot women.

This narrative has somehow managed to dominate the entire landscape, which I think we find really quite remarkable. 

It’s true that AI has in some sense really arrived. This isn’t just another summer. These are very concrete production breakthroughs that really do make a big difference. But it’s also sad how quickly we adapt to this new reality. We rarely take time to acknowledge the magic and the potential of these advances and the kind of good they can bring. In some sense, the narrative has shifted from “isn’t it terrible that AI has been such a failure” to “isn’t it terrible that AI has been such a success”.

Just to address directly this question of existential risk: our perspective is that it has become a real distraction from the core ethics and safety issues, and it has completely overshadowed the debate.

The way we think about AI is that it’ll be a hugely powerful tool that we control and direct whose capabilities we limit, just as we do with any other tool that we have in the world around us, whether they’re washing machines or tractors. 

[Source:- Techworld]

Windows 10 Will Not Be Free For Enterprise?

Microsoft’s free upgrade offer for Windows 10 expired on July 29. Windows 10 Home currently costs $120, Windows 10 Pro costs $200 and upgrading from Windows 10 Home to Pro costs $100. Enterprise customers pay by volume pricing.

Back in January of 2015, Microsoft’s Terry Myerson revealed that Windows 10 would be a free upgrade for those using Windows 7, Windows 8/8.1 and Windows Phone 8.1. These customers would be able to get the operating system for free within one year of its release.

Myerson also branded Windows 10 as a “Windows-As-A-Service” platform, given that it would be kept current for its supported lifetime. That’s a big step for Microsoft and good news for small businesses. Unfortunately, Windows 7 Enterprise and Windows 8/8.1 Enterprise are not part of the free Windows 10 upgrade program.

“Windows 7 Enterprise and Windows 8/8.1 Enterprise are not included in the terms of free Windows 10 Upgrade offer we announced last week, given active Software Assurance customers will continue to have rights to upgrade to Windows 10 enterprise offerings outside of this offer – while also benefiting from the full flexibility to deploy Windows 10 using their existing management infrastructure,” wrote Microsoft’s Jim Alkove in a recent Windows blog update.

Alkove went on to explain that with Windows 10, the company is taking a new approach to enterprise customers called “Current Branch for Business” and “Long-Term Servicing Branch.” With the latter, businesses receive enterprise-level support by way of critical updates and the latest security fixes, but their Windows 10 devices will not automatically receive new features in the five years following the platform’s release.

As for “Current Branch for Business,” businesses can install new features after they’re released to consumers and tested for compatibility on Windows 10 devices. Meanwhile, said devices will receive security updates as they’re released by Microsoft.

“This gives IT departments time to start validating updates in their environments the day changes are shipped broadly to consumers, or in some cases earlier, if they have users enrolled in the Windows Insider Program,” Alkove said. “By the time Current branch for Business machines are updated, the changes will have been validated by millions of Insiders, consumers and customers’ internal test processes for several months, allowing updates to be deployed with this increased assurance of validation.”

Alkove added that businesses can choose to get their Windows 10 updates through WSUS, which will allow IT departments to manually distribute features, or automatically through Windows Update.

Towards the end of the post, Alkove said that the first Windows 10 Long Term Servicing branch will be released alongside the consumer version of Windows 10 later this summer. He also expects to see businesses take a “mixed” approach in how Windows 10 devices will be kept current. Different users and systems will likely have different update schedules.

Businesses are encouraged to join the Windows Insider Program, download the latest Windows 10 Technical Preview, and provide feedback.


[Source:- Tomsitpro]

This is Why Your Website Needs to be Optimized for Mobile


I’m not going to start this post with a story about how every time I see my friends and family, we stick our noses in our smartphones and ignore one another. I’m not even going to give you the “gone are the days when mobile device users could be ignored” line.

You already know all this. You know people are glued to their phones. You know that this includes your customers. You know we search for local businesses on smartphones. Maybe you even know that 82% of smartphone users look up information on their phones while they are in a store contemplating a purchase decision.

And you’ve probably been to a website on your phone only to leave frustrated because the site wasn’t optimized for your device. But, did you know that despite the fact that 56% of traffic to the leading websites comes from mobile devices, only 50% of small business websites are optimized for mobile?

Seems counter-productive, doesn’t it? And only 60% of small businesses even have a website (but that’s a story for another day).

What is mobile optimization?

Mobile optimization is a digital marketing buzzword that means a website is designed to adapt to the device the consumer is using to search for your business.

Why do small businesses need mobile-optimized websites?

Well, both search engines and the people who use mobile devices to search for local businesses favor mobile-optimized websites.

Mobile-Friendly for Humans

Mobile Internet usage is 75% and growing. Besides, imagine it from a consumer’s standpoint. If I’m on my phone searching for a place to eat near me, and I click a restaurant’s website only to find the menu is tiny and the site is difficult to navigate, I’m going to move on to the next business.

I’m not the only one. In a survey, 91% of consumers indicated that they’d turn to a business’s competitor if the business’s website wasn’t optimized for mobile devices.

Seriously, mobile optimization is important:

  • 56% of on-the-go searches have local intent
  • 60% of consumers use mobile exclusively to make purchase decisions

And mobile optimization isn’t just important for consumers. Search engines favor mobile-optimized websites too.

Mobile-Friendly for Search Engines

Of course, you want your business to rank high in searches – but did you know that search engines will penalize websites if they aren’t mobile-friendly? In April 2015, Google updated its search ranking algorithm to reward mobile-friendly websites for mobile searchers. Websites that weren’t mobile-friendly eventually saw a decrease in search rankings while the mobile-optimized sites tended to see an increase in rankings.

And in May 2016, Google tweaked this algorithm again so that mobile-friendly websites got an additional boost in search rankings. And Google and Bing both tell searchers whether or not a website is mobile-friendly in search results on mobile devices.

So, if your website isn’t easy for consumers to use on their smartphones, search engines will lower your mobile search rankings, making it harder for consumers to find you.

How does Google define mobile-friendly? Here’s the definition from Google’s Mobile Terms Glossary:

“Usable on a mobile device (e.g. the site doesn’t slow down the phone, doesn’t scroll horizontally in a vertical orientation, doesn’t use unavailable plugins like Flash). It’s designed for the form factor of the device and its display.”
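One of the signals in that definition can be checked mechanically: a mobile-friendly page normally declares a responsive viewport meta tag so it doesn’t scroll horizontally on a phone. The sketch below checks an HTML string for that tag using Python’s standard-library parser; it is only one signal among the many Google and Bing consider, not a full mobile-friendliness test.

```python
# Check whether an HTML document declares a <meta name="viewport"> tag,
# one common marker of a mobile-friendly page.

from html.parser import HTMLParser

class ViewportChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser reports attrs as (name, value) pairs with lowercase names.
        if tag == "meta" and dict(attrs).get("name") == "viewport":
            self.has_viewport = True

def has_viewport_meta(html: str) -> bool:
    checker = ViewportChecker()
    checker.feed(html)
    return checker.has_viewport

page = '<html><head><meta name="viewport" content="width=device-width, initial-scale=1"></head></html>'
print(has_viewport_meta(page))                          # True
print(has_viewport_meta("<html><head></head></html>"))  # False
```

Google’s and Bing’s own testing tools, mentioned below, go much further (font size, tap-target spacing, plugin use), but this illustrates the kind of page-level signal they inspect.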

Mobile-optimized websites need fonts that are large enough for consumers to read, as well as clickable links or buttons spaced far enough apart that users don’t tap the wrong one by mistake. And there is much more that Google takes into account when determining whether a website is mobile-friendly.

In fact, Google has a list of 25 mobile site design principles.

And Bing has similar criteria for judging the mobile-friendliness of a page. Factors include:

  • Spacing of links and buttons
  • Text size
  • Whether or not the page fits the width of the screen
  • Device compatibility

Bing and Google both have mobile-friendly testing tools for websites. Just copy and paste your website’s URL and click “analyze” to see how your site stacks up.

Google recently updated its mobile-friendly testing tool. You can also check out Bing’s here.

How often should you refresh the look/content of your site?

While fresh content is good for search engines, it isn’t usually possible for small businesses to update their websites or create new blog posts frequently.

There are a few reasons you might want to update your website, but unless it’s outdated or difficult for consumers to use, it might not be necessary.

However, if your business is undergoing a big branding change or you’re moving or adding locations, it’s definitely worth considering an update to your site, along with content that will reflect those changes.

And if your site still isn’t mobile-friendly, that’s another reason you’ll want to update it. Don’t lose customers (and money) to competitors just because people can’t navigate your site on their smartphones.



[Source: Socialmediatoday]

Microsoft’s Bots could be its biggest contribution to computing since Windows


On stage at its annual Build conference keynote, Microsoft CEO Satya Nadella painted a picture of our lives being made easier with bots, intelligent agents that live within apps and services. But, would that life be much better, or less connected than it already is – or both?

Nadella and team’s vision for conversational computing comes just a week after their first public experiment in the field, Tay, came crashing down in a spectacular display of human depravity. Not exactly the best argument for a world run by bots.

Nadella addressed the Twitter chatbot experiment head-on during the March 30 Build 2016 keynote with a three-fold plan for bots, which he believes are the new apps.


To Nadella, so long as bots and the digital assistants that use them are built with the intention to augment human ability and experience, with trustworthiness (privacy, transparency, security) and with inclusion and respectfulness in mind, we’ll be OK.

Or, at the very least, we’ll avoid another Tay scenario.

And, on paper, that generally checks out. Of course, the bots Microsoft envisions aren't necessarily accessible to the masses all at once, but to individuals through specific communication programs or through assistants like Cortana.

Still, Tay demonstrated the sheer power that such intelligent, semi-autonomous software can possess. But I'm worried about another facet of these bots' power.


Do we need another crutch to connect?

That’s my simple question to everyone: are the lay people of the world ready for such power, just as we’re learning empathy on the internet? But, I’ll follow that up with another one.

What will that power do to a society that’s more connected than ever yet whose people struggle to meaningfully connect with one another more than ever?

Take Microsoft’s demonstration of Cortana using bots to facilitate uniting with an old friend in Dublin, Ireland on an upcoming trip. Looking at it one way: Cortana and its squad of bots just helped someone connect with her old friend.

But try looking at it this way: wouldn't that person have remembered her old friend without Cortana's help? Americans don't visit Ireland every day, after all. Or would she not have, because "connected" tech has already become a crutch she leans on to facilitate human interaction?

I like to call this “The Facebook Effect.” How many of your friends and family members’ birthdays do you actually remember now that Facebook reminds you? (I won’t even bother counting myself.)

What happens when we apply similar use cases to far more powerful pieces of technology? My guess is that it won't be long before we rely on bots to remind us to connect with one another, not just to order a pizza.

At that point, I don’t know how much bots are helping so much as hindering our ability to meaningfully or earnestly connect with one another. In the above Dublin scenario, the woman didn’t even reach out to her friend on her own – Cortana did it for her.


Bots for tedium, brains for relationships

Now, don’t mistake: I couldn’t be more excited for for bots to intelligently update my calendar and remind me that I’m on deadline for that laptop review. But, I’d rather handle communicating with other humans on my own, thanks.

Technology by its very definition makes life easier – we'd be nothing without it – but just how much do we want to lean on it to foster human relationships?

As we enter this new phase of automation, we could do with asking ourselves that question more often.

[Source:- Techradar]

Sick of those unwanted wedding pics on Facebook? This app may be your answer.


How many times have you been sick of unwanted, annoying photo posts from friends that clog your social media feed?

And how many times has that driven you to unfollow, or even deactivate your account, only to realize that unfollowing is no foolproof measure of filtering trash, and deactivating means you miss out on the good stuff too — sending you crawling back to the big bad mess of posts and updates?

Photos on Facebook have been known to generate 39% more likes than text posts or even videos, and account for 93% of the most engaging posts.

So, if there’s no escaping pictures, is there a way to escape the rubbish ones at least?

Bangalore-based Galleri5 wants to be the answer to those woes. The startup, which is set to go live by the end of March, calls itself a photo discovery platform and lets users sift through content according to their tastes.


Screenshot of Galleri5.


Founded by Rahul Regulapati and Movin Jain, this social media platform is planning to win over users by making sure their pictures find the right target audience. They curate photos into “galleries”, and even allow users to add pics from Facebook, Instagram, or other social media platforms.


You can earn "Karma" points by "high-fiving" pictures on others' feeds. The more karma points you have, the more followers you are likely to get.

Ditch the baby pics, if you want

Rahul said he and Movin surveyed 1,000 users before rolling out the app, about 80% of whom said they would rather skip their friends' vacation pictures and see photos that interest them instead.

“The end goal here is to organize all the photos people upload as per different topics so people can search, find, or follow things that interest them. Essentially, it’s a very strong discovery engine,” Rahul told Tech in Asia.

Galleri5 has a working Android app, but for now, Apple users can only get in by invitation. The team is also building a website, and all of it will be ready for an open launch by the end of March, Rahul said.

Once you log in, the app asks the user to pick topics of interest. One can choose from options like “Ladakh,” “Cakes,” “Travel,” “Love Thy City,” “Faces”, and others.

Rahul said there are about 1,000 users on the app already, buying into the promise of being able to filter feeds according to areas of interest.

Also buying into the promise are seasoned entrepreneurs in India — Rahul and Movin raised seed funding from Redbus founder Phanindra Sama and TaxiForSure’s founder Raghunandan G.

“Ours is not a commerce platform,” Rahul said. “It works on unit economics. We work on a fixed cost model.”

The team wants to keep the app free for users, while charging for corporate sponsorships. Galleri5 is already running one such program with travel website Cleartrip, and is in talks with the Taj Group of hotels for another.

Galleri5’s targeted filtering also means corporate houses can be more certain that their images reach the audience they want, instead of being tagged as irrelevant online ads.

“We are also careful about which brands we onboard because you don’t want to dilute the image. By the end of the year, we are looking at 5 to 10 brands we will engage and that’ll be more than enough to break even,” he said.


[Source:- Mashable]

Some Windows 10 Redstone features will be delayed until spring 2017

Fresh revelations have been made about how and when Microsoft is going to push out the next major version of Windows 10, known as Redstone, which is planned to be deployed in two big updates.

According to sources who spoke to WinBeta, the first Redstone update, known as RS1, will be with us in June as expected (Microsoft always intended to get this out in the first half of 2016).

However, the not-so-great news for users and businesses awaiting new features on Windows 10 is that the second chunk of Redstone, RS2, which was originally supposed to be out before the year-end, has now been delayed to the spring of 2017.

That’s a little disappointing for keen feature-hounds, although this is only a rumour. However, RS2 will apparently play host to the features and tweaks Microsoft doesn’t have the time to get into RS1 – and a delay does make sense given what we’ve previously heard.

Namely, back in January it was revealed that the first major Redstone update (RS1) might not have all the features Redmond was hoping to cram in, because Microsoft had spent time working on OneCore, the underlying structure of the operating system, and on reshaping the way the preview build system works internally.

Tighter integration

So what new features can we expect with RS1? According to WinBeta, it will mostly be about tying the various devices running Windows 10 – from computers, to the Xbox, to phones – more tightly together.

That will include pushing hard with the Windows Store, making it a hub for all content across Windows 10 devices, and introducing more Project Islandwood and Centennial apps.

The latter are Win32 desktop apps converted for the Windows Store and optimised for Windows 10, and earlier this week we got a hint that the full desktop Office 2016 suite is being readied for the Project Centennial treatment.

In other words, you’ll be able to snag the full desktop apps for Office on the store, as opposed to the touch-focused apps which are currently available there. And they’ll be a dead easy one-click download…

Another nifty feature planned for RS1 is that SMS texts and cellular calls from a Windows 10 smartphone will be brought to the desktop PC using Continuum tech, and users will be able to make phone calls via their PC.

Continuum, incidentally, now means much more than simply transforming interfaces for different screen sizes, and according to WinBeta, Continuum will be used “as a way of bringing Windows 10 devices closer together” in general.

Finally, we can also expect to see more big name games being shared across the Xbox and PC platforms.

And of course, whatever doesn't make the cut for RS1, for whatever reason, will be introduced with RS2 – just not as soon as we might have thought.


[Source:- Techradar]