Software on Mars rover allows it to pick research targets autonomously

Taking only 21,000 of the Curiosity mission’s total 3.8 million lines of code, AEGIS accurately selected desired targets over 2.5 kilometers of unexplored Martian terrain 93% of the time, compared to the 24% expected without the software.

(Phys.org)—A team of researchers from the U.S., Denmark and France has created a report regarding the creation and use of software meant to give exploratory robots in space more autonomy. In their paper published in the journal Science Robotics, the team describes the software, called Autonomous Exploration for Gathering Increased Science (AEGIS), and how well it performed on the Mars rover Curiosity.

Because exploratory robots have limited computing power and operate far from Earth, space scientists believe it would be advantageous for them to be able to select which things to study. Such autonomy would also allow more research to be done when a robot is not able to communicate with Earth, such as when it is on the far side of a planet. Without such a system, a robot has to scan a region, photograph it, send the photographic images back to Earth and then wait for instructions on what to do. With such a system, a robot such as Curiosity can scan the horizon, pick an object to study and then drive over and study it. This approach saves a lot of time, allowing the robot to study more objects before its useful lifespan expires. Because of that, NASA commissioned a team to create such software, which eventually became AEGIS. The software was tested and then uploaded to Curiosity in May of 2016 and was used 54 times over the next 11 months.
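The paper describes AEGIS as detecting candidate targets in navigation-camera images and ranking them against parameters specified by mission scientists. As a loose illustration of that ranking idea (the feature names, weights and values below are hypothetical, not the flight software), a sketch might look like:

```python
# Hypothetical sketch of AEGIS-style target ranking, not the actual flight code.
# Scientists specify preferred feature values; each detected target is scored
# by how far it deviates from those preferences, and the closest match wins.

def score(target, prefs):
    """Lower is better: total deviation from the scientists' preferences."""
    return sum(abs(target[k] - v) for k, v in prefs.items())

def select_target(targets, prefs):
    """Pick the detected target that best matches the requested parameters."""
    return min(targets, key=lambda t: score(t, prefs))

# Made-up detections from a navigation-camera frame.
targets = [
    {"name": "rock_a", "size": 4.0, "brightness": 0.7},
    {"name": "rock_b", "size": 9.5, "brightness": 0.2},
]
prefs = {"size": 10.0, "brightness": 0.3}  # what the scientists asked for

print(select_target(targets, prefs)["name"])  # rock_b is the closer match
```

The real system also rejects low-quality detections and immediately fires ChemCam at the winning target without waiting for Earth in the loop.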

The software allows the rover to control what has been dubbed the ChemCam, a device used to study rocks and other geologic features—a laser is fired at a target and sensors then analyze the light emitted by the resulting vaporized material.

The researchers report that they found the system to be 93 percent accurate, compared to 24 percent without its use. The software, they claim, saved many hours of mission time, which was instead spent on other useful activities such as studying meteorite content. They also report that the software allowed ChemCam targeting to increase from 256 targets per day to 327, which meant that more data was collected in the same amount of time.

(A) The ChemCam gaze. (B) ChemCam shoots lasers at rocks to analyze their content, leaving visible marks both on the surface (upper right) and inside the 16-mm-diameter drill hole (center) of this “Windjana” drill site. (C) ChemCam-measured …

Examples of AEGIS target selection, collected from Martian day 1400 to 1660. Targets outlined in blue were rejected; those outlined in red were retained. Top-ranked targets are shaded green, and second-ranked targets are shaded orange.

Examples of AEGIS fixing human commands that miss the mark, called “autonomous pointing refinement.” (A, C) Human-calculated targets in red. (B, D) Target refinement by AEGIS indicated in red. Credit: Francis et al., Sci. Robot. 2, eaan4582 (2017)


More information: AEGIS autonomous targeting for ChemCam on Mars Science Laboratory: Deployment and results of initial science team use, Science Robotics (2017). robotics.sciencemag.org/lookup/doi/10.1126/scirobotics.aan4582

Abstract
Limitations on interplanetary communications create operations latencies and slow progress in planetary surface missions, with particular challenges to narrow–field-of-view science instruments requiring precise targeting. The AEGIS (Autonomous Exploration for Gathering Increased Science) autonomous targeting system has been in routine use on NASA’s Curiosity Mars rover since May 2016, selecting targets for the ChemCam remote geochemical spectrometer instrument. AEGIS operates in two modes; in autonomous target selection, it identifies geological targets in images from the rover’s navigation cameras, choosing for itself targets that match the parameters specified by mission scientists the most, and immediately measures them with ChemCam, without Earth in the loop. In autonomous pointing refinement, the system corrects small pointing errors on the order of a few milliradians in observations targeted by operators on Earth, allowing very small features to be observed reliably on the first attempt. AEGIS consistently recognizes and selects the geological materials requested of it, parsing and interpreting geological scenes in tens to hundreds of seconds with very limited computing resources. Performance in autonomously selecting the most desired target material over the last 2.5 kilometers of driving into previously unexplored terrain exceeds 93% (where ~24% is expected without intelligent targeting), and all observations resulted in a successful geochemical observation. The system has substantially reduced lost time on the mission and markedly increased the pace of data collection with ChemCam. AEGIS autonomy has rapidly been adopted as an exploration tool by the mission scientists and has influenced their strategy for exploring the rover’s environment.

[“Source-ndtv”]


Google Glass, Apple Newton, Nokia N-Gage Make It to ‘Museum of Failure’


HIGHLIGHTS
Google Glass has made its entry to the Museum of Failure
Joining it are the Apple Newton and Nokia N-Gage
The Museum of Failure is open to the public in downtown Helsingborg
Google and Apple are not synonymous with failure but in the risky business of innovation, anything is possible. At Sweden’s newly opened Museum of Failure, Google Glass and Apple Newton are two such devices that were either ahead of their time or the results of some bad ideas.

Founded by clinical psychologist Samuel West, the museum, which opened to the public on June 7, features over 70 failed products and services from around the world.

“We know that 80 to 90 percent of innovation projects, they fail and you never read about them, you don’t see them, people don’t talk about them. And if there’s anything we can do from these failures, it’s learn from them,” West told CBS News.

The list includes the Nokia N-Gage, the Orbitoclast Lobotomy (a medical instrument), Harley-Davidson Perfume, the Kodak Digital Camera, Sony Betamax and Lego Fiber Optics, among others, according to the information available on the Museum of Failure website.

Developed and marketed by Apple Inc starting in 1987, the Newton was one of the first personal digital assistants to feature handwriting recognition. Apple shipped the first devices in 1993.

Though initially considered innovative, the Newton line was discontinued in 1998, when Apple co-founder Steve Jobs directed the company to stop production of Newton devices.


According to reports, Newton devices ran on a proprietary operating system called Newton OS. The high price and early problems with its handwriting recognition feature limited its sales.

Google Glass, an eye-wearable device, created a storm when the company handed over a prototype to a few “Glass Explorers” in 2013 for $1,500 (roughly Rs. 1,00,000).

The optical head-mounted display became available to the public in May 2014 but was discontinued in 2015 owing to privacy and safety concerns. The device, however, is now gaining momentum in the medical industry.

Nokia’s “N-Gage” was a mobile phone and handheld game system that ran the Series 60 platform on Symbian OS. The device, released in October 2003, was discontinued two years later, having suffered from a poor gaming library.

The Museum of Failure is open to the public in downtown Helsingborg.

[“Source-ndtv”]

Do You Know What it Feels Like to Get Hacked?


Hopefully your answer is “no”, and the intention of this blog is to keep you cyber safe in 2017.

Remember the hack of the Ashley Madison site? The top 3 passwords used on that site were “123456”, “12345” and “password”.

While there are no guarantees that malicious actors won’t get to your information, the following tips will decrease the probability of having your personal information hacked.

Let’s do some cyber maintenance. In addition to changing your passwords, learn other ways to make your cyber presence safer.

1. Have Complicated, Unique, Difficult-To-Crack Passwords

Hate changing your passwords for your social media, online banking, Amazon.com and other online accounts? So do I. But having someone invade your privacy, social channels, or even financial information is a lot worse.

A good solution to create strong passwords (and track them at the same time) is to sign up for a password storage tool. 1Password carries a yearly fee, and I’ve also heard good things about a free tool called LastPass.

All you need to do, once you have such a tool, is to create one really complex password and remember it. Then you can let the tool auto-generate all your other long and tricky passwords, which you won’t need to remember.

2. Never Reuse a Password

Don’t use the same password – or a slightly modified version of it – on multiple accounts.

Make each password unique, with a mix of upper and lower case letters, numbers, special characters – at least 9 characters, ideally more.
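If you want to see what such a generator does under the hood, here is a minimal sketch in Python using the standard `secrets` module (the 16-character default is my choice; any length of 9 or more works with the rules above):

```python
import secrets
import string

def generate_password(length=16):
    """Random password mixing upper/lower case letters, digits and symbols."""
    if length < 9:
        raise ValueError("use at least 9 characters, ideally more")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every required character class is present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())  # a different random password every run
```

A password manager does essentially this for every site, so you never have to invent or remember the results yourself.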

3. Update Your Passwords Regularly

Change your passwords periodically (at least every 6-12 months). While having a really difficult password is the number one way to protect your accounts, changing your password can’t hurt.

4. Prevent “Dictionary Attacks”

Don’t use dictionary words, your pet’s name, your college or any other words that have an obvious correlation to you as a person. These are easy to find, even just via Google, and so-called “dictionary attacks” – which are extremely common and simple – can crack those passwords in no time.

NOTE: Personally, I also discourage publishing your birthday on LinkedIn or Facebook as this date is a crucial detail to cracking and taking over your (online) identity; especially in the USA where birth date and social security number ARE your identity.

5. Keep Your Security and Privacy Settings Current

Facebook, LinkedIn and other social media channels occasionally change their privacy options, which is easy to miss (or dismiss) as such updates are not particularly interesting.

For a safe 2017, visit your social channels and review your privacy and notification settings. While you’re there, disconnect access for apps you no longer use.

6. Enable Two-Factor-Authentication

Something often dismissed as too complicated is two-step verification.

Most social platforms, banks and other accounts now provide this as an option – here’s how it works:

  • In addition to your password, every time you sign in, you get a text message or app notification with a code that you need to enter before you get access to your account.
  • You’ll be asked to specify your trusted device(s) to receive the code, e.g. your iPhone or iPad, so only you have access.
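The app-generated variant of these codes is usually a time-based one-time password (TOTP, RFC 6238): your phone and the service share a secret, and both derive the same short-lived code from it every 30 seconds. A minimal sketch with Python’s standard library (the base32 secret below is a made-up example):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server compute the same code for the same 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds, a stolen password alone is no longer enough to get into the account.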

7. Don’t Store Passwords in Your Browser

I know, it seems convenient but hackers feel the same way.

Browser attacks are very common – here’s some more information on common threats by Kaspersky.

8. Have a Security Program Installed

You need a virus protection program at a minimum, and many of these now come with privacy packages to help you in case you do get hacked.

Here’s a suggestion for 10 virus protection programs. Also consider a service that alerts you to invasions into your personal information, like changes in your credit report. One option is Lifelock.

9. Install Software Updates

Don’t dally when it comes to installing updates to your applications, operating system or website. While I admit that I sometimes wait a few days when a new OS update comes out so that the main bugs can be fixed first, I never wait for more than a week.

10. Be Suspicious of URLs Before You Click

Phishing is generally an attempt to get users to click on a malicious URL that steals their credentials or installs malware.

Never click on a sign-in link in a message that claims to come from your bank, PayPal or another account that requires you to log in.

Often, malicious actors will steal your password that way, or install a virus. Instead, go to the site directly and log in from there to check on any message.

Also, be suspicious about the senders of any message you receive via email or social media. Sometimes when I see a shortened link, I ask the sender to give me the URL to look it up myself or I pass.
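Some of these red flags can even be checked mechanically before you click. This toy heuristic (the shortener list is illustrative and far from exhaustive; real phishing filters are much more sophisticated) flags shortened links and lookalike hosts:

```python
from urllib.parse import urlparse

# Illustrative only: a tiny sample of URL-shortener domains.
SHORTENERS = {"bit.ly", "goo.gl", "tinyurl.com", "t.co"}

def looks_suspicious(url, expected_host=None):
    """True if the link is shortened or its host isn't the site it claims to be."""
    host = (urlparse(url).hostname or "").lower()
    if host in SHORTENERS:
        return True  # the real destination is hidden behind the shortener
    if expected_host:
        legit = host == expected_host or host.endswith("." + expected_host)
        return not legit  # e.g. "paypal.com.evil.example" fails this check
    return False

print(looks_suspicious("http://bit.ly/abc123"))                      # True
print(looks_suspicious("https://paypal.com.evil.example/login",
                       expected_host="paypal.com"))                  # True
print(looks_suspicious("https://www.paypal.com/signin",
                       expected_host="paypal.com"))                  # False
```

The safest habit is still the one above: type the site’s address yourself rather than trusting any link in a message.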


[Source:- Socialmediatoday]

The Power of Geofencing and How to Add it to Your Marketing [Infographic]


Thanks to the ubiquitous smartphone, there’s now an entirely new level of marketing available: Geofencing.

Geofencing is a location-based marketing tool that enables more active consumer focus. There are three ways to track a customer’s location – GPS, Bluetooth, and beacons – and each method finds and targets customers in different ways. And while it’s a relatively new technology, it’s important for marketers to be aware of – and understand – geofencing in order to help their company’s bottom line.
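At its simplest, a GPS geofence is just a test of whether a device is within some radius of a point. A minimal sketch using the haversine great-circle distance (the store coordinates and 200 m radius are arbitrary examples):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, centre, radius_m):
    """True if the user's (lat, lon) falls within radius_m of the fence centre."""
    return haversine_m(*user, *centre) <= radius_m

store = (51.5074, -0.1278)  # hypothetical store location
print(inside_geofence((51.5080, -0.1280), store, 200))  # a street away: True
print(inside_geofence((48.8566, 2.3522), store, 200))   # another city: False
```

A real deployment would run this check on the device and fire a push notification – the location-based alert mentioned below – when the user crosses the boundary.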

A solid 30% of the international population uses location-based services, and an overwhelming majority of them are open to receiving location-based alerts from businesses. This can help increase sales and loyalty, especially when paired with CRM data, because you can offer individual customers the messaging they need in order to make a conversion.

With 92% of U.S. smartphones capable of responding to geofencing, this is a powerful marketing tool that retailers and other location-based companies need to use.

[Source:- socialmediatoday]

Google DeepMind: What is it, how does it work and should you be scared?


Today concluded the series of five ‘Go’ matches between AlphaGo, an AI system built by DeepMind, and South Korean champion Lee Sedol. AlphaGo won the series 4-1.

‘Go’ is a strategy-led board game in which two players aim to gather and surround the most territory on the board. The game is said to require a certain level of intuition and to be considerably more complex than chess. AlphaGo won the first three games; Sedol claimed the fourth, but AlphaGo took the fifth to close out the series.

What is DeepMind?

Google DeepMind is an artificial intelligence division within Google that was created after Google bought University College London spinout, DeepMind, for a reported £400 million in January 2014. 

The division, which employs around 140 researchers at its lab in a new building at King’s Cross, London, is on a mission to solve general intelligence and make machines capable of learning things for themselves. It plans to do this by creating a set of powerful general-purpose learning algorithms that can be combined to make an AI system or “agent”. 

Suleyman explains

These are systems that learn automatically. They’re not pre-programmed, they’re not handcrafted features. We try to provide as large a set of raw information to our algorithms as possible so that the systems themselves can learn the very best representations in order to use those for action or classification or prediction.

The systems we design are inherently general. This means that the very same system should be able to operate across a wide range of tasks.

That’s why we’ve started as we have with the Atari games. We could have done lots of really interesting problems in narrow domains had we spent time specifically hacking our tools to fit the real world problems – that could have been very, very valuable. 

Instead we’ve taken the principled approach of starting with tools that are inherently general. 

AI has largely been about pre-programming tools for specific tasks: in these kinds of systems, the intelligence of the system lies mostly in the smart human who programmed all of the intelligence into the smart system and subsequently these are of course rigid and brittle and don’t really handle novelty very well or adapt to new settings and are fundamentally very limited as a result.

We characterise AGI [artificial general intelligence] as systems and tools which are flexible and adaptive and that learn. 

We use the reinforcement learning architecture which is largely a design approach to characterise the way we develop our systems. This begins with an agent which has a goal or policy that governs the way it interacts with some environment. This environment could be a small physics domain, it could be a trading environment, it could be a real world robotics environment or it could be an Atari environment. The agent takes actions in this environment and it gets feedback from the environment in the form of observations and it uses these observations to update its policy of behaviour or its model of the world. 
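That agent–environment loop is the standard reinforcement-learning skeleton, and it fits in a few lines. This sketch is purely illustrative (a made-up two-action environment and a simple tabular value update, nothing like DeepMind’s actual systems): the agent acts, receives reward feedback, and updates its model of which action is worth taking.

```python
import random

random.seed(0)

def step(action):
    """Hypothetical environment: action 1 is rewarded far more often than 0."""
    reward = 1 if random.random() < (0.8 if action == 1 else 0.2) else 0
    return "observation", reward

value = {0: 0.0, 1: 0.0}   # the agent's learned estimate of each action's worth
alpha, epsilon = 0.1, 0.1  # learning rate and exploration rate

for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice([0, 1])        # explore: try something random
    else:
        action = max(value, key=value.get)    # exploit: act on the current model
    _, reward = step(action)
    # Use the feedback to update the model of the world, as described above.
    value[action] += alpha * (reward - value[action])

print(value)  # value[1] ends up well above value[0]: the agent found the goal
```

Replace the lookup table with a deep network over raw pixels and the toy environment with an Atari game, and you have the broad shape of the approach described in the talk.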

How does it work? 

The technology behind DeepMind is complex to say the least but that didn’t stop Suleyman from trying to convey some of the fundamental deep learning principles that underpin it. The audience – a mixture of software engineers, AI specialists, startups, investors and media – seemed to follow. 

Suleyman explains 

You’ve probably heard quite a bit about deep learning. I’m going to give you a very quick high-level overview because this is really important to get intuition for how these systems work and what they basically do. 

These are hierarchical, layered networks initially conceived back in the 80s but recently resuscitated by a bunch of really smart guys from Toronto and New York.

The basic intuition is that at one end we take the raw pixel data or the raw sensory stream data of things we would like to classify or recognise. 

This seems to be a very effective way of learning to find structure in very large data sets. Right at the very output we’re able to impose on the network some requirement to produce some set of labels or classifications that we recognise and find useful as humans. 

How is DeepMind being tested? 

DeepMind found a suitably quirky way to test what its team of roughly 140 people have been busy building. 

The intelligence of DeepMind’s systems was put through its paces by an arcade gaming platform that dates back to the 1970s. 

Suleyman demoed DeepMind playing one of them during his talk – Space Invaders. In his demo he illustrated how a DeepMind agent learns how to play the game with each go it takes. 

Suleyman explains 

We use the Atari test bed to develop and test and train all of our systems…or at least we have done so far. 

There is somewhere on the magnitude of 100 different Atari games from the 70s and 80s. 

The agents only get the raw pixel inputs and the score so this is something like 30,000 inputs per frame. They’re wired up to the action buttons but they’re not really told what the action buttons do so the agent has to discover what these new tools of the real world actually mean and how they can utilise value for the agent. 

The goal that we give them is very simply to maximise score; it gets a 1 or a 0 when the score comes in, just as a human would. 

Everything is learned completely from scratch – there’s absolutely zero pre-programmed knowledge so we don’t tell the agent these are Space Invaders or this is how you shoot. It’s really learnt from the raw pixel inputs. 

For every set of inputs the agent is trying to assess which action is optimal given that set of inputs and it’s doing that repeatedly over time in order to optimise some longer term goal, which in Atari’s sense, is to optimise score. This is one agent with one set of parameters that plays all of the different games.

Live space invaders demo

An agent playing space invaders before training struggles to hide behind the orange obstacles, it’s firing fairly randomly. It seems to get killed all of the time and it doesn’t really know what to do in the environment. 

After training, the agent learns to control the robot and barely loses any bullets. It aims for the space invaders that are right at the top because it finds those the most rewarding. It barely gets hit; it hides behind the obstacles; it can make really good predictive shots like the one on the mothership that came in at the top there. 

As those of you know who have played this game, it sort of speeds up towards the end and so the agent has to do a little bit more planning and predicting than it had done previously so as you can see there’s a really good predictive shot right at the end there. 

100 games vs 500 games

The agent doesn’t really know what the paddle does after 100 games, it sort of randomly moves it from one side to the other. Occasionally it accidentally hits the ball back and finds that to be a rewarding action. It learns that it should repeat that action in order to get reward. 

After about 300 games it’s pretty good and it basically doesn’t really miss. 

But then after about 500 games, really quite unexpectedly to our coders, the agent learns that the optimal strategy is to tunnel up the sides and then send them all around the back to get maximum score with minimum effort – this was obviously very impressive to us. 

We’ve now achieved human performance in 49/57 games that we’ve tested on and this work was recently rewarded with a front cover of Nature for our paper that we submitted so we were very proud of that. 

How is it being used across Google? 

Google didn’t buy DeepMind for nothing. Indeed, it’s using certain DeepMind algorithms to make many of its best-known products and services smarter than they were previously. 

Suleyman explains

Our deep learning tool has now been deployed in many environments, particularly across Google in many of our production systems.

In image recognition, it was famously used in 2012 to achieve very accurate recognition on around a million images with about 16 percent error rate. Very shortly after that it was reduced dramatically to about 6 percent and today we’re at about 5.5 percent. This is very much comparable with the human level of ability and it’s now deployed in Google+ Image Search and elsewhere in Image Search across the company.

As you can see on Google Image Search on G+, you’re now able to type a word into the search box and it will recall images from your photographs that you’ve never actually hand labelled yourself. 

We’ve also used it for text transcription. We use it to identify text on shopfronts and maybe alert people to a discount that’s available in a particular shop or what the menu says in a given restaurant. We do that with an extremely high level of accuracy today. It’s being used in Local Search and elsewhere across the company. 

We also use the same core system across Google for speech recognition. It trains in less than five days. In 2012 it delivered a 30 percent reduction in error rate against the existing old school system. This was the biggest single improvement in speech recognition in 20 years, again using the same very general deep learning system across all of these. 

Across Google we use what we call Tool AI or Deep Learning Networks for fraud detection, spam detection, hand writing recognition, image search, speech recognition, Street View detection, translation. 

Sixty handcrafted rule-based systems have now been replaced with deep learning based networks. This gives you a sense of the kind of generality, flexibility and adaptiveness of the kind of advances that have been made across the field and why Google was interested in DeepMind.

Google DeepMind, AlphaGo and Go: the discussion

Techworld editors discuss the implications of AlphaGo beating all the humans.

The number of scientists and world-famous entrepreneurs speaking out on the potential dangers of AI is increasing week by week, with renowned physicist Stephen Hawking and PayPal billionaire Elon Musk being two of the most outspoken voices warning about AI. 

The pair, along with several others including Bill Gates and Sky cofounder Jaan Tallinn, believe that machines will soon become more intelligent than humans, just as they do in recent Hollywood blockbuster Ex Machina. 

Despite this, Google is keen to develop its AI algorithms as much as possible in order to improve its offerings and boost its profits. 

Suleyman tried to put people’s minds at ease and explain the logic behind all the hype. 

Suleyman explains

Over the last 18 months or so, AI breakthroughs have, I think, created a sense of anxiety or in some cases hype around the potential long term direction of the field. 

This of course is not least induced by Elon [Musk] who recently Tweeted that we need to be super careful with AI because it’s “potentially more dangerous than nukes” and that’s obviously backed up by various publications including Nick Bostrom’s – all culminating in this kind of sense that AI has the potential to end all human kind. 

If you didn’t really pay attention to the field and all you did was read, as I think the vast majority of people do, descriptions of the kind of work that we do on the web then you could be forgiven for believing that AI is actually about this. Whether it’s Terminator coming to blow us up or societies of AIs or mad scientists looking to create quite perverted women robots.

This narrative has somehow managed to dominate the entire landscape, which I think we find really quite remarkable. 

It’s true that AI has in some sense really arrived. This isn’t just a summer. These are very concrete production breakthroughs that really do make a big difference but it’s also sad how quickly we adapt to this new reality. We rarely take time to acknowledge the magic and the potential of these advances and the kind of good that they can bring. In some sense, the narrative has shifted from isn’t it terrible that AI has been such a failure to isn’t it terrible that AI has been such a success. 

Just to address directly this question of existential risk. Our perspective on this is that it’s become a real distraction from the core ethics and safety issues and that it’s completely overshadowed the debate. 

The way we think about AI is that it’ll be a hugely powerful tool that we control and direct whose capabilities we limit, just as we do with any other tool that we have in the world around us, whether they’re washing machines or tractors. 

[Source:- Techworld]

BT gets another chance to fix its broadband: Here’s what it means for you


Ofcom, BT and Openreach – what this means for you

BT used to like to tell us that “it’s good to talk”, but this morning it probably wasn’t the happiest of phone calls with the telecoms regulator Ofcom, which has told BT that it needs to seriously work on its relationship with its Openreach subsidiary.

While the regulator hasn’t said the two should completely break up, it is basically telling BT “it’s not me, it’s you”, and has recommended steps that would see BT and Openreach consciously uncouple further than ever before.

As when any relationship goes sour, there’s one big question: What does that mean for the kids? Or in this case, the millions of people in Britain who rely on BT and Openreach to provide their broadband? Read on to find out.

Umm, what actually is Openreach?

BT is a slightly weird company, owing to its unique history. For most of the 20th century, it was owned by the government and was actually part of the Post Office, but in 1984 it was privatised by the Thatcher government.

This meant that BT had a complete monopoly on all of the infrastructure across the country that used to provide our phone lines and today provides our broadband.

Fast-forward to 2016 and this is still mostly the case. It means that whether you decide to get your broadband from Sky, TalkTalk, or BT itself (or one of the many other providers), ultimately the data will be flowing through fibre-optic cables and copper wires that are owned and maintained by BT. It’s why when you sign up for Sky, it still insists that you must have a BT compatible line.

(The only major exception to this is Virgin Media, which has built its own entirely separate network – though other completely separate fibre companies are also growing).

As you might imagine, this situation would theoretically put BT at an advantage. What would stop it from offering faster speeds to BT customers than Sky? It’s for this reason that in 2005 Ofcom insisted that BT keep the Openreach division mostly separate from the rest of the company – and insisted that it must treat other ISP customers the same way that it treats BT consumer customers.

This, incidentally, is why if you’ve ever had a maddening encounter with BT customer services that it sometimes feels that one part of BT isn’t talking to the other – because in the case of Openreach it is literally restricted in how it can do so.

Despite their relative separation, this hasn’t kept BT’s rivals happy. Last year Sky and TalkTalk called for Ofcom to intervene and spin-off Openreach into a completely separate company.

So what has Ofcom told BT to do?

The big news today is that Ofcom appears to partially agree with these concerns and has proposed a package of changes that will see Openreach further separated from the BT mothership. Although it will allow BT to continue to own Openreach, it wants the division to be a distinct entity within the company, with its own board and chairperson (albeit while still wholly owned by BT).

It wants the new company to consult with customers (that’s Sky and TalkTalk – not us consumers) when it makes big infrastructure investments, and it wants the new company to own its assets (the fibre network) and employ staff directly, rather than via other parts of BT, so there are no conflicting loyalties.

It even wants Openreach to have a separate brand and logo, so that people don’t automatically associate it with BT.

What will this mean for customers?

The motivation behind these reforms is that they will hopefully make the market more competitive. This could ultimately mean better, faster broadband service for customers, or lower prices. Bringing Openreach’s customers into the decision-making process could also mean that better decisions are made about which parts of the network need improving.

And Openreach will have to work harder to succeed on its own terms, rather than simply act as an appendage to BT.

Ofcom has also made a number of other demands which could increase competitiveness. For example, it is forcing Openreach to produce an online database detailing where its telegraph poles and underground tunnels are, so that other providers can more easily plug into BT’s network.

It could mean we start seeing more “Fibre to the Premises” broadband from different companies, as they can plug directly into the Openreach network, rather than rely on BT’s existing copper wire network to do the last stretch between the telephone exchange and your house.

This all goes in tandem with other new rules announced by Ofcom, such as automatic compensation when services fail, and new rules to make switching broadband provider less of a nightmare than it is now. If it’s easier to switch, then all ISPs will have to work harder to keep you happy – which can only be good for consumers.

So it could be good news. Ofcom could be the friend that everyone needs: Someone who will tell them the harsh truth, even if they don’t want to hear it.

If BT truly wants to improve its relationship, perhaps it needs to stop being so clingy with Openreach?

[Source: Techradar]

This security threat has hit almost half of UK businesses and it will get worse

A new piece of research has found that approaching half of all businesses have been hit by a ransomware attack over the last year.

The study from Malwarebytes questioned over 500 IT leaders from companies across the UK and Germany, as well as Canada and the US, and found that almost 40% said they’d experienced a ransomware attack during the past year.

That’s a pretty staggering figure which shows the number of cybercriminals now targeting companies for online extortion – because, obviously enough, demands can be higher when made to a business (particularly a large one) than to an individual user.

Of those organisations which were victimised, over 40% ended up paying the ransom. The typical demand was over $1,000 (around £750, AU$1,320) in 60% of cases, but one in five demanded over $10,000 (around £7,500, AU$13,200) to unlock data.

Ground to a halt

Over a third of companies hit by ransomware said they lost revenue due to the incident, and 20% had their business stopped entirely for a time. Over 60% of incidents took longer than nine hours to deal with, Malwarebytes found.

As for how the attacks were delivered, the largest share – 46% – were initiated via email. No surprises there – although email was less prevalent as an attack vector in the UK, where it accounted for only 39% of attacks. In the US, it was responsible for 59%.

Nathan Scott, Senior Security Researcher at Malwarebytes, commented: “Over the last four years, ransomware has evolved into one of the biggest cyber security threats in the wild, with instances of ransomware in exploit kits increasing 259% in the last five months alone.”

As ever, staff members need to be educated on avoiding malware and have security policies to follow, but if your business is unfortunate enough to fall victim to a scam, check out our feature discussing whether or not you should ever pay up to ransomware criminals.

 

[Source: Techradar]

Opinion: When Chrome, YouTube and Firefox drop it like it’s hot, Flash is a dead plugin walking

After more than 20 years making the web a slightly more interesting and interactive place, albeit one that pandered to designers’ worst excesses and (in pre-broadband days) led to interminable download waiting times, the word on the net is that Adobe Flash Must Die.

The ironic hack of Hacking Team, the controversial security and surveillance software firm, exposed yet another brace of security flaws and vulnerabilities in Flash, the hugely popular multimedia animation plugin for web browsers. This may be the final straw: Mozilla has disabled Flash by default in its Firefox browser, and Facebook’s chief of security has called for Adobe to set a date when the program will be taken behind the shed and shot.

Why hate Flash?

The software and services that Hacking Team sells provide the means for its government and law enforcement clients to break into and even control computers remotely through the internet. The huge leak of the firm’s company data also revealed details of previously unknown vulnerabilities in software that could be exploited to provide ways of hacking computers – known as zero-day vulnerabilities because the software’s manufacturer has no time to fix the problem.

Zero-day vulnerabilities are great news for criminals. Three of these vulnerabilities were in Flash, and some of those revealed in the leaked documents appeared in attack kits available online within hours – faster than the developers of the affected programs could fix the holes, let alone distribute the updates to millions of users worldwide.

The Flash plugin is notorious for being riddled with security flaws and other shortcomings. Yet it’s also one of the most popular pieces of software on the planet. So what will it take to kill it?

It seemed like a good idea at the time

Back in the web’s dim and distant past (the 1990s), web pages were static, unyielding things with just text and images and occasionally a dumb animated GIF that everyone but the designer hated.

But we wanted more: interactivity, responsiveness, perhaps even a little bit of bling. Flash made this happen: animators and designers could create all the interactivity they wanted and wrap it up in a file that was inserted into the web page and downloaded on request.

The web is a hostile place for browsers, however, and the more functionality exposed to the web, the larger the surface exposed to attack. Flash offers a large attack surface, and because animation is often computationally demanding, Flash needed deep access to many aspects of the computer to work well, making any flaw potentially serious.

Security isn’t the only problem with Flash. It was Flash’s demanding processor and battery consumption, not security, that caused Steve Jobs to banish Flash from the iPhone and iPad. On a device with resources as limited as a smartphone or tablet, Flash just doesn’t fit.

While these drawbacks could be tackled, Flash’s proprietor Adobe seems uninterested in doing so, having not released an update to Flash Player on mobile since 2012.

Flash forward to the future

Yet Flash endures, mainly on account of the last 20 years in which websites have been created using it and the plugin has been installed in billions of browsers. There have been attempts at alternatives: Microsoft’s Silverlight was Windows-specific and never caught on, and even the company itself urges people not to use it; Java applets have even worse problems than Flash, and have already been deprecated or removed from modern browsers.

The best hope for the elimination of Flash is HTML 5. The latest version of HTML, the markup language in which web pages are written, finally includes support for directly embedding video and audio in a web page. In combination with JavaScript, web pages can now offer all the interactivity and animated bling that anyone could want. Having previously been without a doubt the largest user of Flash, YouTube now uses an HTML 5-based player as default for its video content. Google’s Chrome browser dropped support for Adobe Flash some time ago, and uses only its own version.

HTML 5 has two major advantages over Flash. As a much more modern technology (2014 versus 1995) it delivers better results with fewer resources, making it better suited to mobile devices. But more importantly it requires no plugin, which means the surface open to attack by hackers doesn’t expand just because you want to watch a video, or because some site wants to display an animated advert.

Of course there are still sites that use Flash extensively, and these will have to be redesigned in HTML 5. While these sites still exist and people wish to use them, the Flash problem will not go away.

It’s more than just Flash

Flash’s problems make it an easy target, but it’s just one place where security failures occur. Of the zero-day exploits discovered so far in the Hacking Team leak, three relate to Flash, one to Java, one to a font processor for Windows (also made by Adobe), and one to Microsoft’s Internet Explorer 11 browser. But security is hard, no software is invulnerable, and breaches like this will continue to happen. Even if Flash is somehow secured – or disappears entirely – security flaws will still be found and exploited in other software. Security is an ongoing journey, not a destination.

The bigger problem is how the exploits originate. Hacking Team didn’t discover most of these exploits – they bought them from hackers who found them, keeping them secret for use in their products. Perhaps this is why a security firm such as Hacking Team becomes a tempting target for criminals, as a concentrated source of zero-day exploits.

As governments and intelligence agencies collect more information, they will also become more valuable targets. If Britain’s GCHQ is able to bypass all encryption, as prime minister David Cameron has suggested, then all our data could be vulnerable to anyone who can find the slightest crack in GCHQ’s armour.

[Source: Phys.org]

Democrats are staging a sit-in to protest inaction on gun control — but you may only see it on social media

With the cameras off, cable news coverage flipped back and forth between covering the sit-in and not. That’s not true of the members of Congress, who have turned to social media tools to bring much-needed visuals to the story. At one point, C-SPAN resorted to airing a live Periscope feed from Rep. Scott Peters of California.

Representative Charles Rangel used Periscope to broadcast the scene, while others tweeted and posted pictures to Facebook. As others on Twitter picked up what was happening, more politicians, activists and others began tweeting, generally using the hashtag #NoBillNoBreak, a reference to passing a gun control bill.

The social media blitz worked. As of Wednesday afternoon, #NoBillNoBreak was the top trending topic in the U.S.

Dear Windows, OS X folks: Update Flash now. Or kill it. Killing it works

Adobe has published new versions of Flash to patch a vulnerability being exploited right now by hackers to hijack PCs and Macs.

The APSB16-10 update addresses a total of 24 CVE-listed flaws, including one (CVE-2016-1019) that’s been exploited in the wild to inject malware into Microsoft Windows and Apple OS X systems.

Users running Flash Player versions 21.0.0.197 and prior for Windows, OS X, Linux and ChromeOS are advised to update the plugin to address the vulnerabilities. For Flash Player Extended Support the vulnerable software is version 18.0.0.333 and earlier, and for Flash Player for Linux it is version 11.2.202.577 and earlier.
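As a rough illustrative sketch (hypothetical helper functions, not Adobe’s actual update checker), comparing dotted version strings numerically shows how an “and prior” range from an advisory can be tested:

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '21.0.0.197' into a sortable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_update(installed: str, max_vulnerable: str) -> bool:
    """True if the installed version is at or below the highest
    vulnerable version for its branch (i.e. 'and prior')."""
    return parse_version(installed) <= parse_version(max_vulnerable)
```

Note that the comparison only makes sense within a branch: an 18.0.0.x Extended Support install should be checked against the 18.0.0.333 threshold, not the 21.0.0.197 one.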

Among the vulnerabilities patched in the update is CVE-2016-1019, a remote code execution flaw that is currently being exploited in the wild by the Magnitude Exploit Kit. According to researchers with Trend Micro, the flaw is being targeted on both Windows and OS X systems to perform automated malware installs.

Simply browsing a webpage booby-trapped with a malicious Flash file is enough to trigger execution of evil code, allowing miscreants to potentially snoop on victims’ passwords and other sensitive information on their computers.

Adobe is recommending that users update Flash as soon as possible to patch the flaws. Users running Chrome, Internet Explorer and Edge will automatically get the update when updating their browser.

Researchers warned earlier this week that the CVE-2016-1019 zero-day was being targeted in the wild and that an out-of-band security patch to address the vulnerability was in the works.

The patch is the latest fix for a Flash plugin that has become a favorite target for exploits and drive-by malware attacks. Researchers have suggested that users and administrators disable Flash Player in order to prevent attacks.

One company doing just that is Microsoft, which announced earlier today that upcoming versions of the Edge browser will disable “non-central” Flash content in webpages by default (users can change the setting). Microsoft is also recommending that site owners consider moving their pages to newer, safer formats such as HTML5.
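For site owners taking that advice, a first step is simply finding which pages still embed Flash. Here is a minimal sketch (an assumption-laden illustration, not any official migration tool) that flags markup referencing .swf files:

```python
import re

# Hypothetical audit helper: flag pages that still embed Flash content,
# a starting point when planning a move to HTML5.
FLASH_PATTERN = re.compile(r'<(?:object|embed)[^>]*\.swf', re.IGNORECASE)

def page_uses_flash(html: str) -> bool:
    """Return True if the markup appears to embed a .swf Flash file."""
    return bool(FLASH_PATTERN.search(html))
```

A real audit would use an HTML parser rather than a regex, but this is enough to triage a batch of static pages.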

 
[Source: The Register]