Google experts explore open source security…

An open source security event brought discussions of supply chain security and flaws in how open source projects are managed.

As more organizations rely on open source components in their software, the question of how to protect those components grows more urgent.

That was the premise of an event Google hosted today, at which open source experts discussed the myriad challenges of securing open source software, the priorities for companies, and the steps the industry can take to improve the overall state of open source security.

Synopsys data shows that the average software application depends on at least 500 open source libraries and components, a 77% increase from 298 dependencies two years earlier. Open source libraries and components make up more than 75% of the code in the average application, 84% of applications contain at least one vulnerability, and a typical application contains 158 of them.

In a talk on open source supply chain security, Google software engineer Dan Lorenc advised organizations to understand what they are using. He admitted this step sounds obvious but is not easy, especially once developers begin building and publishing artifacts and combining artifacts into other artifacts. When a vulnerability is reported, whether it is accidental or malicious, not knowing what you are running can cause you trouble.

“Control when adding dependencies,” he said. Governance and continuous auditing of new dependencies, whether internal or open source, is a good way to protect software.
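As one illustration of that kind of governance, here is a minimal sketch in Python that inventories every installed package and flags anything outside an approved list; the allowlist contents are hypothetical stand-ins for whatever a real review process would approve.

```python
# A minimal dependency-governance sketch: enumerate installed packages
# and flag any that were never approved. The APPROVED set is a
# hypothetical example, not a recommendation.
from importlib.metadata import distributions

APPROVED = {"requests", "urllib3", "certifi", "idna", "charset-normalizer"}

def audit_dependencies():
    """Return (name, version) pairs for packages not on the allowlist."""
    unapproved = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name and name not in APPROVED:
            unapproved.append((name, dist.version))
    return unapproved

if __name__ == "__main__":
    for name, version in audit_dependencies():
        print(f"UNAPPROVED: {name}=={version}")
```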

This control can extend to building the components you use, Lorenc continued, noting that this, too, is a difficult step for most organizations. In most cases, the contents of binary packages are hard to verify. It doesn't have to be all or nothing, he added, but part of using open source code is being able to build and compile it. Knowing you can build it when necessary is half the battle, and it shows you control the code that goes into your applications.
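Here is a minimal sketch of that "can we build it?" check, relying on pip's --no-binary flag to force a local compile from the source distribution rather than accepting a prebuilt wheel; the package and version are arbitrary examples.

```python
# A minimal sketch: verify a dependency builds from its source
# distribution instead of trusting a prebuilt binary.
import subprocess
import sys
import tempfile

def builds_from_source(package: str, version: str) -> bool:
    """Return True if the package's sdist compiles into a wheel locally."""
    with tempfile.TemporaryDirectory() as workdir:
        # --no-binary :all: forces pip to fetch the source distribution
        # and compile it here, never downloading a prebuilt wheel.
        result = subprocess.run(
            [sys.executable, "-m", "pip", "wheel",
             "--no-binary", ":all:", "--no-deps",
             "-w", workdir, f"{package}=={version}"],
        )
        return result.returncode == 0

if __name__ == "__main__":
    # Example target only; swap in a dependency you actually ship.
    ok = builds_from_source("requests", "2.31.0")
    print("builds from source:", "OK" if ok else "FAILED")
```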

“Open source software is software,” Lorenc said. “It’s full of bugs; it’s full of CVEs that can be exploited.” While some of those bugs do little damage, others can be harmful.

Lorenc emphasized that organizations should have plans for dealing with both zero-day vulnerabilities and known flaws. Zero-days are the flashy, exciting bugs that usually make headlines, and companies should have a playbook for fixing them quickly, but older vulnerabilities may not get the attention they deserve. In large organizations running many environments and systems, these flaws are easy to overlook.

“Just because you forgot it doesn’t mean the attacker won’t find it,” he continued. “These things are easy to find from the outside.”
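They are indeed easy to find: Google's OSV database (https://osv.dev) makes known open source vulnerabilities queryable by anyone with its public API. Below is a minimal sketch of checking a single dependency against it; the package and version are illustrative, chosen because Jinja2 2.11.2 is an old release with published advisories.

```python
# A minimal sketch of querying the OSV database for known
# vulnerabilities in one pinned dependency.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(package: str, version: str, ecosystem: str = "PyPI"):
    """Return the list of OSV records affecting package==version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    for vuln in known_vulns("jinja2", "2.11.2"):
        print(vuln["id"], "-", vuln.get("summary", "no summary"))
```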

Organizations must keep track of the open source software they are running and constantly update it, he said, noting that this is widely considered “crappy,” “boring” work that usually goes unrewarded. Lorenc recommends automating the monitoring and tracking to make the process as painless as possible.

“This is a problem that everyone should worry about,” he said of the known vulnerabilities.
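What might that automation look like? A minimal sketch, assuming a Python environment: it compares each installed package against the latest release on PyPI's JSON API and reports anything stale. The endpoint is real; the plain version-string comparison is a deliberate simplification.

```python
# A minimal sketch of automated staleness tracking: compare installed
# package versions against the latest releases on PyPI.
import json
import urllib.request
from importlib.metadata import distributions

def latest_version(package: str) -> str | None:
    """Fetch the latest released version from PyPI, or None on failure."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["info"]["version"]
    except Exception:
        return None  # not on PyPI, network error, etc.

def report_outdated() -> None:
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if not name:
            continue
        latest = latest_version(name)
        # A plain inequality keeps the sketch simple; a real tool would
        # parse and compare versions properly (e.g., packaging.version).
        if latest and latest != dist.version:
            print(f"{name}: installed {dist.version}, latest {latest}")

if __name__ == "__main__":
    report_outdated()
```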

More broadly, the industry can do a better job of finding and fixing unknown bugs.

“Normalize upstream work in the projects you use,” Lorenc said.

“Upstream” refers to the direction of a project’s original authors or maintainers. There is a common misconception that because code is on GitHub and looks well-reviewed, it is bug-free. That is not true, he said, and “fixing bugs upstream can help build important bridges and serve the public good.”

Open source vulnerability disclosure: process tips
In a separate talk, Google program manager Anne Bertucio explained the process of verifying, communicating, and documenting vulnerabilities in open source projects in a way that serves both project owners and the people who report flaws.

First of all, she said, it shouldn’t be difficult for someone who discovers a vulnerability to contact the project’s Vulnerability Management Team (VMT). The team may decide to use a common tool or one it already uses, Bertucio said, but email is fine and works well as a backup option. The security policy should be easy to find and should spell out what to include in a bug report and what response to expect. If it takes three days to acknowledge a submission, say so up front.
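A hypothetical sketch of such a policy, with the contact address, response windows, and every other specific invented purely for illustration:

```text
SECURITY POLICY (example)

Reporting:  Email security@example-project.org (PGP key at /security/pgp.txt).
Include:    Affected version(s), steps to reproduce, and expected impact.
Response:   Reports are acknowledged within 3 business days; we aim to
            release a fix and publish an advisory within 90 days.
Credit:     Reporters are credited in the advisory and CVE unless they
            prefer to remain anonymous.
```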

From there, the issue is confirmed and verified. Project owners should ask reporters whether they are willing to help develop a patch, whether they want to be credited in the CVE, and whether they agree with the disclosure timeline.

“Reporters really like to see things disclosed and named as quickly as possible,” Bertucio said. While 90 days is the standard, it is important to figure out what works for both parties.

When it comes to disclosure, she added, the security advisory should be factual and short: a straightforward statement of what people need to know and how to mitigate the issue. If you want to tell the story of how the bug was found and how it works, write that up in a separate blog post.
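In that spirit, a complete advisory might read something like this; every specific below is invented for illustration:

```text
EXAMPLE-2021-001: Path traversal in archive extraction

Affected:  examplepkg 1.0.0 through 1.4.2
Fixed in:  examplepkg 1.4.3

A crafted archive can write files outside the extraction directory.
Mitigation: Upgrade to examplepkg 1.4.3 or later; until then, do not
extract archives from untrusted sources.
```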

There is no point in hiding the details of a vulnerability, Bertucio said, noting that “security through obscurity is not real security at all.” Likewise, she said, there is nothing wrong with an open source project having a large number of CVEs; it means the project has a strong disclosure response and is actively being hardened.

Kelly Sheridan is a staff editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported on Microsoft for InformationWeek and covered finance for Insurance & Technology.
