News & Blog

Harpoon to pierce junk tested in space for first time

A harpoon designed to clear harmful space junk has been tested for the first time.

The British-led RemoveDebris mission aims to tackle the problem of waste material in space left by rockets and other deployments.

Scientists say between 16,000 and 20,000 pieces of junk have been tracked orbiting the Earth.

The test was carried out by Airbus and involved a harpoon piercing through sample pieces of debris that were dangled on a boom about one-and-a-half metres from the spacecraft.

When the harpoon hits debris, a barb is deployed to secure it.

Image: The harpoon pierces through the skin of debris

Although the harpoon is still a number of years away from operational use, the experiment is a major step towards making it possible to clean up space junk as the number of spacecraft launches continues to increase.

Astronaut Tim Peake has revealed the damage orbital junk can cause to spacecraft.

He shared an image of a chipped window panel on board the International Space Station in 2016.

Scientists believe something as small as a paint chip hurtling towards the space station could have caused the damage.

It is hoped that, once fully operational, the harpoon will be capable of firing at debris up to 30 metres away.

Image: Between 16,000 and 20,000 pieces of junk have been tracked orbiting the Earth

Back on Earth, engineers are still trying to work out how the system can be used to target moving objects.

The harpoon – which is capable of travelling at 20 metres per second – is a joint initiative between Airbus, the University of Surrey and Surrey Satellite Technology Ltd.

Previously, a RemoveDebris experiment showed how a net could be used to catch potentially dangerous pieces of rubbish orbiting the Earth.

Science Minister Chris Skidmore said: “Space debris can have serious consequences for our communications systems if it smashes into satellites.

Image: When in full operation, the harpoon will fire at debris up to 30 metres away

“This inspiring project shows that UK experts are coming up with answers for this potential problem using a harpoon, a tool people have used throughout history.

“This mission is a powerful example of the UK’s expertise in space technology and that by working together, our world-class universities and innovative companies can hugely contribute to the government’s aims for a highly skilled economy through our modern Industrial Strategy.”

This story was originally published on Sky News Technology

PewDiePie: Roblox lifts ban after social media backlash

Online social game Roblox has reinstated PewDiePie’s account after banning the popular YouTuber over an “inappropriate username”.

PewDiePie revealed the ban in a video on his YouTube channel to his more than 85 million subscribers.

Other users then reported that they had received bans or warnings simply for mentioning the YouTuber’s name.

PewDiePie’s account was eventually reinstated, with Roblox calling the ban “incorrect”.

“Roblox is committed to providing a safe and civil platform for our players, including blocking memes that represent or are synonymous for behaviour that falls outside of our community standards,” it said in a post on its developer forums.

“In December, ‘pewdie’ became one of these negative memes on Roblox. As such, we began blocking the creation of new usernames that incorporated the term.

“The legacy account that PewDiePie used in his livestream was incorrectly banned as part of the administration of this policy.”

PewDiePie took the whole affair in his stride.

Others hit in the crossfire

Even before PewDiePie made a video announcing the ban, several people took to social media after noticing a “purge” of anything related to the YouTuber.

Some claimed that items purchased in-game which featured PewDiePie branding had been removed from their accounts.

Others said that they had received bans for writing “subscribe to PewDiePie” in the game’s chat.

And Roblox player Rogos, who features in PewDiePie’s video, said that his account was disabled simply for writing “hi PewDiePie” when he saw the popular YouTuber in-game.

The “pewdie” meme

Meanwhile, some of the game’s players have questioned the authenticity of the “pewdie meme”.

“I have never seen anything like that on Roblox,” said KonekoKitten in a YouTube video. “Back in 2018, in December, not once did I see it.”

It may refer to the “subscribe to PewDiePie” messages that became ubiquitous in the latter part of last year.

Various paid adverts appeared in Roblox at the time urging people to subscribe to his YouTube channel, with some even reporting receiving direct messages about it.

But others have suggested that even if the ban was legitimate, it would have been next to impossible to comply with, as players were not aware of it.

“The ‘pewdie’ ban is not in the rules,” said Ericzona on Twitter. “Why was there no public warning for this ban?”

It is not clear whether users will face sanctions in the future for discussing PewDiePie in the game platform’s chat.

This story was originally published on BBC Technology News

Elon Musk's 'malicious' AI too dangerous to release

An artificial intelligence system developed by Elon Musk’s OpenAI organisation is too dangerous to be released, the group believes.

OpenAI is a non-profit research organisation founded in 2015 with $1bn in backing from Mr Musk and others to promote the development of artificial intelligence technologies that benefit humanity.

The system its researchers have developed, officially called GPT-2, can generate text that reads as though it were written naturally, and has so far been released only in part.

However, researchers are withholding the fully trained algorithm “due to our concerns about malicious applications of the technology”.

“The model is chameleon-like, it adapts to the style and content of the conditioning text,” claimed the researchers, who included a number of examples to show how it worked.

To generate output, the algorithm is fed a variable amount of text and then produces sentences based on its predictions of what should naturally follow.
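
As a rough illustration of that next-word prediction loop, the sketch below generates text with the small, publicly released GPT-2 model via the Hugging Face transformers library. The library, model name and sampling settings are assumptions made here for illustration; the article does not describe OpenAI’s own tooling.

```python
# Minimal sketch of autoregressive text generation, assuming the small,
# publicly available "gpt2" model and the Hugging Face transformers library
# (an illustrative choice, not OpenAI's original setup).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The prompt is the "conditioning text" whose style and content the model adapts to.
prompt = "Space debris can have serious consequences for satellites because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# generate() repeatedly predicts the next token given everything produced so far,
# which is the prediction of "what should naturally follow" described above.
output_ids = model.generate(
    input_ids,
    max_length=60,                        # prompt plus generated tokens
    do_sample=True,                       # sample from the predicted distribution
    top_k=50,                             # restrict sampling to the 50 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,  # avoid a padding warning
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```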

This means it appears capable of writing legitimate-looking news articles, raising the risk that people intent on producing fake news around a theme could generate such content at scale.

OpenAI believes the technology has several significant policy implications.

In positive news, the researchers believe the technology could be used to develop AI writing assistants, dialogue agents – such as conversational interfaces for voice assistants – and aid with language translation and speech recognition.

However, the harms could be significant too. Because of the algorithm’s ability to copy the style it had been trained on, it could be used to impersonate others online and generate misleading news articles.

It could also automate the production of abusive or fake content to post on social media, as well as the production of spam and phishing content.

“These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns,” say the researchers.

“The public at large will need to become more sceptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more scepticism about images.”

The company’s chief technology officer, Greg Brockman, shared one particularly convincing piece of text which OpenAI claimed its original algorithm had produced, and which one of its employees posted beside the office recycling bin.

Previously, Mr Musk has criticised Facebook boss Mark Zuckerberg for having a “limited” understanding of artificial intelligence in a spat over the potential dangers of advances in the field.

Mr Musk, alongside scientists such as Stephen Hawking, warned of the potential moment at which artificial intelligence develops the ability to redesign itself.

They warned that, if this happened, there could be an intelligence explosion as the machine rapidly redesigned itself faster than humankind could catch up.

Many researchers fear that this could potentially lead to human extinction.

This story was originally published on Sky News Technology