ESG: S is for Social

In our discussion of ESG broadly and environmental impact specifically, we have repeatedly encountered the idea that what’s good for an organization’s profit is often good for its ESG metrics as well. For many companies, this is especially true of the S in ESG, which typically refers to social impact and, for some investors, human capital concerns.

Social Impact

Social and human capital are historically related to the traditional concepts of brand value, goodwill, and labor relations. Today, investors, employees, and customers often ask more generally how an organization’s decisions affect both its internal stakeholders and broader society. In some cases, these effects flow directly from its products or services; for example, companies whose offerings are viewed as unsafe or harmful, whether by design or due to quality issues, are often rated poorly. In other cases, social impact is measured by how an organization interacts with its employees, contractors, supply chain partners, and the communities in which it does business. Unsurprisingly, organizations that maintain good relations with their personnel and community partners often experience less conflict, reducing legal expenses, the risk of regulatory fines, and employee turnover.

There is often some overlap between “S” and the other two pillars of ESG. For example, a food product company that decides to source its supply chain ethically may develop guidelines for ensuring that suppliers meet its standards (Governance). When these standards include considerations for worker safety, they might result in the use of less toxic chemicals in fields or factories. This change to contracting standards, driven by legal or procurement and typically viewed as a “G” activity, also has a positive impact on the health of workers (“S”) and on water and soil (“E”). Large companies can frequently initiate cascading change through such “boring” contractual tools.

Technology and Professional Services

For technology and professional services organizations, social and human capital impacts often stem from the use or misuse of data. For example, almost all “cloud” or software-as-a-service (SaaS) businesses receive and store information about individuals and organizations. When this information is “lost,” inappropriately accessed, or sold, those individuals and organizations can be harmed; these concerns become even more acute when the information is highly sensitive, such as health information, or when the individuals are at risk or are children. Protecting against such harm requires a strong commitment to data privacy and information security, and organizations that chronically underinvest in these areas tend to suffer data breaches and the resulting damage to their brand.

In order to manage the wide array of statutory and contractual obligations related to data privacy, organizations need to invest in building strong, technology-enabled data privacy and information security programs. For example, organizations can demonstrate their commitment to data privacy by using static analysis to identify problematic data model fields or by scanning real database records for possible issues. Many organizations also eventually pursue ISO 27001 certification or a SOC 2 report, often in response to customer contractual requirements.
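To make this concrete, here is a minimal Python sketch of the kind of scanning described above: it flags schema fields whose names suggest personal data and sample record values that match simple patterns. The field names, regular expressions, and example rows are hypothetical and purely illustrative; they are not a description of any particular product or framework.

```python
import re

# Illustrative patterns only; a real privacy program would use a much richer
# taxonomy (health data, financial identifiers, locale-specific formats, etc.).
PII_FIELD_NAMES = {"ssn", "social_security_number", "date_of_birth", "email", "phone"}
PII_VALUE_PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scan_schema(field_names):
    """'Static' check: flag schema fields whose names suggest personal data."""
    return [f for f in field_names if f.lower() in PII_FIELD_NAMES]

def scan_records(records):
    """'Dynamic' check: flag record values that match simple PII patterns."""
    findings = []
    for i, record in enumerate(records):
        for field, value in record.items():
            for label, pattern in PII_VALUE_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.append((i, field, label))
    return findings

if __name__ == "__main__":
    # Hypothetical schema and sample rows for illustration.
    schema = ["customer_id", "email", "notes"]
    rows = [
        {"customer_id": "42", "email": "jane@example.com", "notes": "called 555-123-4567"},
    ]
    print("Schema flags:", scan_schema(schema))
    print("Record flags:", scan_records(rows))
```

In practice, checks like these are most useful when they run automatically in development and deployment pipelines rather than as one-off audits.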

We developed and open-sourced a Responsible Data Science Policy Framework to help companies ensure that their data science activities are conducted responsibly from a technical, legal, and ethical perspective.

Algorithms

Some organizations also encounter social impact considerations when they systematize decisions via algorithms, such as when banks or other credit-offering institutions develop models to assist in underwriting processes or when a recruitment firm uses filters to screen for potential candidates. While much recent attention has been focused on possible bias in machine learning data sets and models, it is important to note that such concerns date back long before modern data science; in fact, the Equal Credit Opportunity Act (ECOA), enacted in 1974, and its related disparate impact analyses provided a framework for managing social impact long before random forests or transformer models were introduced. In general, companies building algorithms should ensure that data privacy and ethical review are incorporated into their research process as part of a holistic data science maturity assessment. By developing such a mature data science program, companies can validate their commitment to minimizing social harm and maximizing social good related to their activities.
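To illustrate the disparate impact idea, the hypothetical Python sketch below computes group-level approval rates from model decisions and compares them using the familiar “four-fifths” rule of thumb. The group labels, sample decisions, and 0.8 threshold are illustrative assumptions only; real fair-lending analysis under ECOA involves far more statistical and legal rigor.

```python
# A minimal illustration of a disparate impact ("four-fifths rule") check on
# model decisions. Groups, data, and the 0.8 threshold are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def adverse_impact_ratios(rates):
    """Ratio of each group's approval rate to the highest-rate group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical decisions: group_a approved 80/100, group_b approved 55/100.
    sample = (
        [("group_a", True)] * 80 + [("group_a", False)] * 20
        + [("group_b", True)] * 55 + [("group_b", False)] * 45
    )
    rates = selection_rates(sample)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```

A ratio below 0.8 does not by itself establish unlawful discrimination, but it is the kind of signal that should trigger further review within a mature data science program.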

Organizational Culture

Lastly, one of the most common issues faced by technology and professional services companies relates to employee turnover. Like a factory that is forever swapping parts and revising plans, an organization that is constantly replacing personnel is likely to suffer. Developing an attractive culture – as well as compensating competitively – can help organizations score well on human capital metrics and increase the productivity of their teams. For example, organizations that invest in tools like professional IDEs often have more productive developers who are able to focus on interesting problems instead of trivial busywork. Some organizations are even using automated static or dynamic analysis tools to help identify developers and teams who create more value – and thus deserve more shared reward. Many of these issues are addressed as part of a comprehensive technology maturity assessment or a more focused software development maturity assessment.

Technology-focused or not, organizations are increasingly expected to demonstrate that their actions minimize harm to the people they partner with and to the customers and communities they affect.

Finish reading this series as we discuss the G in ESG: Governance, or return to read our Introduction or E for Environment.