Doors opened remotely, faces and voices swapped? How can security be ensured in the AI era?
CCTV News: Network security touches many aspects of production and daily life. In recent years, artificial intelligence has brought people many conveniences, but it has also posed many new challenges to network security.
A single vulnerability may become a hacker's "key"
Today, driverless cars are no longer a scene from science fiction; they have become a reality in many parts of China. But what happens if a smart car comes under cyberattack? Network security experts simulated this scenario for us.
Network security expert Deng Qinyang: Hackers break into the vehicle over the network, then move laterally to exploit vulnerabilities in its components. Right now the user is driving the vehicle normally, and I can simulate a hacker remotely attacking the door system to open the door from a distance. Our system has just detected a remote door-opening command issued while the car is moving; having a door open during actual driving is extremely dangerous.
According to experts, if a security vulnerability affects only the application layer, the threat is limited to individuals. But if hackers attack underlying systems such as driving and communication, the damage can be far more extensive.
Build a "security base" and fasten the network "seat belt"
Data show that in 2023 there were 8.05 million attacks on domestic Internet-of-Vehicles service platforms, a year-on-year increase of 25.5%. Building a "security base" for intelligent connected vehicles is urgent. To this end, China has issued a number of relevant laws and regulations, and has published and implemented several vehicle safety standards and more than ten platform security standards.
How to ensure safety and tell real from fake in the AI era
Beyond driverless vehicles, many criminals also use AI to swap faces and clone voices, infringing on citizens' personal privacy. A single portrait photo or a close-up video of a person can become a new tool for criminals.
"Face-swapped" indecent photos forged for extortion
Not long ago, Mr. Wu of Shenzhen suddenly received a multimedia message from an unfamiliar mobile number. When he opened it, it turned out to be an indecent photo of himself with an unknown woman in a hotel room.
Mr. Wu (the victim): He threatened that I had to transfer money to him by a certain time. I immediately sensed something was wrong and called the police.
After receiving the report, the police immediately opened an investigation. By combing through report records, they found that Mr. Wu was not the only one with this experience. As the investigation deepened, the police discovered a criminal gang with a clear division of labor: some members retouched photos, some searched for targets, and others carried out the extortion. The police then launched a coordinated takedown operation and arrested more than 10 suspects led by a man surnamed Wang.
Wei Zhang, police officer of the Pingshan Branch of the Shenzhen Public Security Bureau, Guangdong Province: This is relevant evidence extracted from the suspects' computers. These are images of victims that the suspects collected from various channels such as the Internet, which they then synthesized into the indecent photos used to blackmail the victims.
A single photo is enough to make the fake pass for real
So how do these forged photos pass for the real thing? Network security experts demonstrated "AI face-swapping" technology for us: with nothing more than a static photo of the target, it can set the person in the photo in motion.
Jaco, member of the Professional Committee of Artificial Intelligence Security Governance of the China Cyberspace Security Association: On the far right is a static picture that doesn't move on its own. When the presenter performs some actions, the AI face-swapping software captures the motion features of the presenter's face and maps them onto the static picture, so the picture moves along with the presenter's actions. AI face-swapping technology is now fairly mature and can even support real-time face swapping in video calls. Combined with changing the background, altering the voice, and so on, the result is even more lifelike and harder to distinguish.
Accelerate the development and application of countermeasure technologies
To address the abuse of technologies such as AI face-swapping, the National Network Security Standardization Technical Committee recently released version 1.0 of the "Artificial Intelligence Security Governance Framework", which proposes technical countermeasures and comprehensive prevention measures, as well as guidelines for the safe development and application of artificial intelligence. Network security experts suggest accelerating the development and application of countermeasure technologies, using AI to govern AI.
How to ensure safety and personal privacy in the AI era
Beyond technological upgrades, what should we ordinary people pay attention to in daily life to guard against the threats posed by AI?
Avoid excessive disclosure of biometric information such as faces and fingerprints
According to experts, the most critical enabler of AI face-swapping and other illegal acts is the leakage of personal privacy information. In daily online activity, we should therefore avoid excessively disclosing or sharing personal biometric information such as faces and fingerprints. If you receive a video call that appears to be from a "family member" or "leader" asking for a transfer or remittance, be sure to verify it carefully.
Stay alert! Don't casually share these three kinds of photos
In addition, experts advise against posting the following kinds of photos online.
First, don't post photos of yourself holding your ID card or holding a blank sheet of paper, as they contain sensitive information such as your name, ID number, and address.
Second, don't post photos of tickets and receipts. Many people like to photograph their train tickets or boarding passes when traveling for business or leisure and share them with their circle of friends. These tickets contain not only information such as your name, departure point, and destination, but their barcodes or QR codes also encode the passenger's personal information, which criminals may decode and steal.
Third, don't post photos of things like house keys and license plates. Using big data and other technical means, criminals can analyze these photos to determine the poster's specific location at a specific time, and can also infer the poster's living habits.