Cracks in Trust for JAKI
Amid efforts to build a responsive city through technology, a small flaw has opened up a larger issue: public trust. The case of alleged photo manipulation in reports by Jakarta Transportation Agency (Dishub) officers on the Jakarta Kini (JAKI) app serves as a reminder that digital systems are not just about sophistication, but also integrity.

This alleged manipulation adds to a growing list of flaws in JAKI, following the earlier discovery of AI-generated response photos attached to citizen reports of illegal parking in Kalisari, East Jakarta. The locations and situations in both photos are identical, but the timestamps differ. The similarity inevitably raises questions about how valid the reports underpinning government responses have been so far.

The DKI Jakarta Provincial Government does not deny the findings. The Head of the Communication, Informatics, and Statistics Agency (Diskominfotik) of DKI Jakarta, Budi Awaluddin, confirmed that the manipulation did occur and is currently being processed.

“That is true (there are officers manipulating reports with fake timestamps), and we have also found (the evidence). Currently, the case is being processed and we will coordinate with the inspectorate for examination,” Budi said in his official statement on Wednesday (8/4/2026).

JAKI, which has so far been the main channel for public complaints, is now being tested on its reliability. Budi emphasised that improvement steps will be taken immediately: the system will be strengthened in terms of both technology and supervision. Photo validation will be tightened, the use of real-time documentation will be encouraged, and features to detect manipulation, including AI-based ones, will be developed.

“Regarding this, we are strengthening the JAKI system, both in terms of technology and supervision. In the future, there will be enhancements such as stricter photo validation, the use of direct field documentation (real-time capture), and the development of features to detect potential manipulation, fraud, or AI-generated content,” he explained.

This step is considered important to maintain the integrity of the system while restoring public trust.