Instagram to roll out new features to counter cyberbullying

Bullying. Sadly, it’s a problem that isn’t restricted to the school grounds of our younger and geekier selves, but something that tends to follow people around regardless of age and even privacy. Cyberbullying has become more widespread than traditional bullying and is often just as traumatic for its victims, a trend which tech companies are increasingly trying to address.

Instagram has new features on the way (via The Verge) that it hopes will address cyberbullying: the ability to “shadow ban” other users, and a new artificial intelligence system designed to flag potentially offensive comments. Both features are due to enter testing soon.

The “shadow ban” will essentially provide a way for a user to restrict another user without that person realising they have been banned. They will still be able to see your posts and comment on them, but their comments will only be visible to themselves, meaning you and the people you actually want to interact with can keep talking in peace while the restricted person wonders why their snarky comments aren’t getting any responses from you. A rough sketch of how that visibility rule might work is below.
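To make the mechanics concrete, here is a minimal sketch of that kind of comment filtering. This is not Instagram’s actual implementation; the `User`, `Comment` and `visible_comments` names are hypothetical, and the restriction is modelled simply as a set of usernames the post owner has restricted.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    username: str
    # Usernames this account has restricted ("shadow banned")
    restricted: set = field(default_factory=set)

@dataclass
class Comment:
    author: User
    text: str

def visible_comments(post_owner: User, viewer: User, comments: list) -> list:
    """Return the comments that `viewer` should see on `post_owner`'s post.

    A restricted user's comments stay visible to that user (so they don't
    realise they are restricted) but are hidden from everyone else.
    """
    return [
        c for c in comments
        if c.author.username not in post_owner.restricted
        or c.author is viewer
    ]

# Example: "troll" is restricted by alice, so only troll sees troll's comment.
alice = User("alice", restricted={"troll"})
bob = User("bob")
troll = User("troll")
comments = [Comment(troll, "snarky remark"), Comment(bob, "nice photo!")]

print([c.text for c in visible_comments(alice, bob, comments)])    # ['nice photo!']
print([c.text for c in visible_comments(alice, troll, comments)])  # ['snarky remark', 'nice photo!']
```

The key design point is that filtering happens per viewer at read time, so the restricted user’s experience looks completely normal to them.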

Along with this feature, Instagram is also hoping to use a new AI to flag potentially offensive comments and ask the commenter whether they really want to follow through with posting. They’ll be given the opportunity to undo their comment, and Instagram says that during tests the prompt encouraged “some” people to reflect on and undo what they wrote. A nice touch, though given the emotional state most bullies are in, it’s unlikely to change course for most people. Still, it’s better than nothing.
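Conceptually this is a classifier score plus a confirmation prompt. Here is a minimal sketch under that assumption; the `toxicity_score` stand-in and the threshold are invented for illustration and say nothing about how Instagram’s real model works.

```python
TOXICITY_THRESHOLD = 0.8  # assumed cut-off; the real threshold is not public

def toxicity_score(text: str) -> float:
    """Placeholder for a trained toxicity classifier returning a 0..1 score."""
    offensive_terms = {"idiot", "loser", "ugly"}
    hits = sum(word.strip(".,!?").lower() in offensive_terms for word in text.split())
    return min(1.0, hits / 2)

def submit_comment(text: str, confirm) -> bool:
    """Post the comment, but pause for confirmation if it looks offensive.

    `confirm` is a callback (e.g. a UI dialog) that returns True only if
    the user still wants to post after being nudged to reconsider.
    """
    if toxicity_score(text) >= TOXICITY_THRESHOLD and not confirm(text):
        return False  # user chose to undo / rewrite the comment
    # ... persist the comment here ...
    return True

# Example: the confirm callback returns False, simulating a user who thinks better of it.
posted = submit_comment("You are such an idiot loser!", confirm=lambda t: False)
print(posted)  # False
```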

Instagram has already tested several bully-focused features, including an offensive comment filter that automatically screens comments that “contain attacks on a person’s appearance or character, as well as threats to a person’s well-being or health”, and a similar filter for photos and captions. It all points to a real effort by Facebook to tackle the problem on its platform.

Can AI become a new tool for hackers?

Over the last three years, the use of AI in cybersecurity has become an increasingly hot topic. Every new company entering the market touts its AI as the best and most effective, and existing vendors, especially those in the enterprise space, are deploying AI to reinforce their existing security solutions. AI is enabling IT professionals to predict and react to emerging cyber threats more quickly and effectively than ever before. So how can they expect to respond when AI falls into the wrong hands?

Imagine a constantly evolving and evasive cyberthreat that could target individuals and organisations remorselessly. This is the reality of cybersecurity in the era of AI.

Despite the focus on AI, there has been no reduction in the number of breaches and incidents. Rajashri Gupta, Head of AI at Avast, sat down with Enterprise Times to talk about AI and cybersecurity, and explained that part of the challenge is not just having enough data to train an AI, but having sufficiently diverse data.

This is where many new entrants into the market struggle. They can train an AI on small sets of data, but is that enough? How do they teach the AI to tell the difference between a real attack and a false positive? Gupta talked about this, and about how Avast is dealing with the problem; the sketch below shows why the distinction matters.
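As a rough illustration (a generic sketch, not Avast’s approach), the usefulness of a detector comes down to a handful of numbers: how many real attacks it catches versus how many benign events it wrongly flags. The helper below computes those from labelled data.

```python
def detection_metrics(y_true, y_pred):
    """Compare predicted attack labels against ground truth.

    y_true / y_pred are sequences of 1 (attack) and 0 (benign).
    Returns precision, recall and false-positive rate: the numbers that
    decide whether an AI detector is usable in practice.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, fpr

# Toy example: 2 real attacks among 10 events; the model flags 4 events,
# two of which are false alarms.
y_true = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
precision, recall, fpr = detection_metrics(y_true, y_pred)
print(f"precision={precision:.2f} recall={recall:.2f} false-positive rate={fpr:.2f}")
# precision=0.50 recall=1.00 false-positive rate=0.25
```

A model trained on small or homogeneous data can score well on recall while drowning analysts in false positives, which is exactly the trade-off Gupta describes.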

During the podcast, Gupta also touched on the challenge of ethics for AI and how we deal with privacy. He also talked about IoT and what AI can deliver to help spot attacks against those devices. This is especially important for Avast, which is launching a new range of devices for the home security market this year.

AI has shaken up the security industry, with automated threat prevention, detection and response revolutionising one of the fastest-growing sectors of the digital economy.

Hackers, meanwhile, are using AI to accelerate polymorphic malware, which constantly changes its own code so that signature-based tools can’t identify it.