GitHub invites programmers to speak directly to Copilot • The Register


In short GitHub is testing a new feature that will let developers ask its AI-powered programming assistant, Copilot, to generate code using voice commands.

The legally troubled code-suggestion tool's new trick is not a simple speech-to-text dictation engine that would require developers to read their program source aloud line by line. Instead, "Hey, GitHub!" works as a voice interface for Copilot, which automatically suggests code from prompts.

The hope is that coders will be able to describe a function aloud in general terms, and that Microsoft-owned GitHub's Copilot will suggest source code to meet the request.

As usual, developers can decide whether to keep or discard Copilot's suggestions. Hey, GitHub! is designed to help them program faster using their voice: they can have the software automatically complete boilerplate code and manually tweak any suggested output with their keyboards. The new feature can also be used to move code around or provide summaries to make scripts easier to read and understand.
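To give a sense of the kind of interaction GitHub is describing, a developer might say something like "write a function that checks whether a number is prime," and Copilot could respond with a suggestion along these lines (a hypothetical sketch; actual Copilot output varies and is not guaranteed to match this):

```python
def is_prime(n: int) -> bool:
    """Return True if n is a prime number, False otherwise."""
    if n < 2:
        return False
    # 2 is the only even prime.
    if n % 2 == 0:
        return n == 2
    # Check odd divisors up to the square root of n.
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True
```

The developer would then accept, edit, or reject the suggestion with the keyboard, as they do with Copilot's typed prompts today.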

Hey, GitHub! will be provided as part of the $10-a-month subscription fee for Copilot. If you are interested, you can register for the technical preview here.

New Amazon AI bot

Amazon introduced a new robotic arm, named Sparrow, that runs machine-learning algorithms to automatically identify and sort items for packing. Handy when Amazon workers try to unionize or complain about working conditions and long hours.

Sparrow was featured on stage at Amazon’s Delivering the Future conference this week. It’s a tall L-shaped arm with a claw at one end; it uses suction cups on the fingertips of the gripper to pick up objects and sort them into bins. Jason Messinger, senior technical product manager for robotic manipulation at Amazon Robotics, said Sparrow can successfully grip all kinds of objects of different sizes, even if they have curved surfaces.

Using computer vision, the system controlling the robotic arm is able to recognize objects and can identify about 65 percent of Amazon's inventory. "It's not just about picking up the same things and moving them around with great precision, which is what we've seen in previous robots," Messinger said, according to CNBC.

Amazon is investing in AI robots to perform tedious and repetitive tasks, potentially relieving itself of the need to hire so many humans.

Midjourney launches improved AI-powered text-to-image tool

Midjourney, best known for its particularly artistic subscription-based text-to-image software, has released version four of its model.

“V4 is an entirely new codebase and an entirely new AI architecture,” Midjourney founder David Holz said on the company’s Discord channel. “This is our first model trained on a new Midjourney AI supercluster and has been in the works for over 9 months.”

The folks at Ars Technica tested the model and noticed an improvement in v4's ability to turn text prompts into images: compared to v3, it produced better scene compositions and more appropriate object sizes relative to one another. The latest version was also better at producing realistic images.

Holz previously told The Register he didn't want Midjourney to get too good at generating images realistic enough to pass for photographs. "For us, when we were optimizing it, we wanted it to be beautiful, and beautiful doesn't necessarily mean realistic.

"If anything, actually, we're steering it a bit away from photos. … I know this technology can be used as a deepfake super machine. And I don't think the world needs more fake photos. I don't really want to be a source of fake photos in the world." ®
