
When Digital Machines Decide Human Fate

Source: KOMPAS (translated from Indonesian) | Regulation
Image: KOMPAS

In recent years, artificial intelligence (AI) has entered decision-making spaces. This phenomenon is evident in employee recruitment, financial services, and digital public services. The process is swift, but the reasoning behind decisions is often opaque.

A 2025 Jobscan report notes that 99 per cent of Fortune Global 500 companies use Applicant Tracking Systems (ATS) to screen job applicants. Job seekers’ CVs are scanned, matched against keywords, and candidates are either approved or rejected within seconds. For companies, this is efficient.
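The screening step described here can be sketched as a simple keyword filter. This is a hypothetical illustration only: real ATS products are proprietary, and the keywords, scoring rule, and threshold below are invented for the example.

```python
# Hypothetical sketch of an ATS-style keyword screen.
# Real systems are proprietary; the keywords and threshold are invented.

def keyword_score(cv_text: str, required_keywords: list[str]) -> float:
    """Return the fraction of required keywords found in the CV text."""
    cv_lower = cv_text.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in cv_lower)
    return hits / len(required_keywords)

def screen_applicant(cv_text: str, required_keywords: list[str],
                     threshold: float = 0.6) -> bool:
    """Approve only CVs meeting the keyword threshold.
    Note: the applicant never sees the score or the missing keywords."""
    return keyword_score(cv_text, required_keywords) >= threshold

keywords = ["python", "sql", "data analysis"]
cv = "Five years of Python and SQL experience in finance."
print(screen_applicant(cv, keywords))  # 2 of 3 keywords -> 0.67 -> True
```

Even this toy version shows the asymmetry the article describes: the decision is instant, but nothing in the output tells a rejected applicant which keyword was missing.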

However, for applicants, it feels like a verdict without trial. There is no interview, no opportunity to explain context, and often no feedback. Rejection without explanation is demoralising and gradually erodes applicants’ confidence.

The Financial Services Authority (OJK) records outstanding online lending at Rp87.61 trillion as of August 2025, with a 90-day default rate of 2.60 per cent. Although this figure remains below the 5 per cent threshold, behind these seemingly stable numbers are small and medium-sized enterprise operators rejected without knowing why. They do not know what must be improved for approval on their next application. They cannot contest the evaluation and have no channel to communicate changes in their circumstances.

For instance, a business may have recovered and cash flow improved, or the business model may have changed, yet the old track record still casts a shadow. As a result, viable business plans are delayed, expansions fail, and some revert to informal lending with exorbitant interest rates.

This phenomenon shows that AI, originally a support tool, has become a gatekeeper. The machine becomes the initial filter determining who may proceed and who must stop at the gate. Citizens ultimately face a cold algorithmic screen rather than a friendly human face.

When policy is executed by machines, the relationship between those in power and those subject to power shifts. Affected parties only receive the final outcome. They lack equal access to examine the basis of the decision.

The state has already responded to concerns about accountability. For instance, OJK issued POJK 29/2024 on Alternative Credit Scoring, which provides a framework for governance, supervision, and compliance in technology-based credit assessment. The Ministry of Communication and Informatics also issued Circular Letter No. 9/2023 on artificial intelligence ethics. Business actors must ensure that system decisions can be explained, audited, and accounted for to affected parties.

However, regulation alone is insufficient if accountability is “lost” during implementation. Real and communicative responsibility holders are needed. For decisions affecting citizens’ rights and livelihoods, AI should be used as part of governance, not as a single standard.

At minimum, three safeguards need testing. First, explanations that ordinary people can understand, not merely numbers or scores. Second, real correction channels that are easily accessible and do not go in circles. Third, an accountability mechanism if the system proves wrong or causes harm.
