The Dangers of Sharing Financial Information On AI Sites Like AutoGPT, ChatGPT

It is advisable not to share mobile numbers, bank account details, PAN card information, and other sensitive data on AI platforms to protect against fraud.

Joshua Browder, the CEO of DoNotPay, a robot lawyer startup, recently shared his experience of handing over his entire financial life to OpenAI’s GPT-4. While he claimed to have saved significant money and recovered lost or unclaimed money using AutoGPT, his experience raises questions about the potential risks of using artificial intelligence (AI) platforms.

The rise of AI has brought about many exciting developments in various industries, including personal finance. With AI-powered sites like AutoGPT and ChatGPT, it's now possible to automate many tedious tasks associated with managing your finances, from creating budgets to finding lost money. However, while these platforms may offer numerous benefits, it's essential to be aware of their potential dangers.

Privacy Risks: One of the most significant concerns with AI platforms is the risk to user privacy. When using these tools, it's important to understand that your personal information may be shared with third parties. User profiling is one such risk: AI platforms can analyse your behaviour and build a user profile that can be used for targeted advertising or other purposes.

Data sharing is another issue. AI platforms may share user data with other businesses or organisations for research or advertising. Third-party access is a related concern: AI systems may grant outside developers access to user data, which those developers could exploit for their own purposes.

Finally, there is the risk of a data breach. AI platforms are not immune to breaches, in which unauthorised individuals gain access to your personal information. Such breaches can result from security flaws in the system, such as weak passwords or unencrypted user data.

Explains Ritesh Bhatia of V4WEB Cybersecurity, a cybercrime investigation and digital forensics firm: “One of the main privacy risks of GPT is the possibility of exposing sensitive or confidential information contained in the data it is trained on. GPT is trained on large amounts of text data, which may include personally identifiable information (PII) such as names, addresses, and other personal details. If GPT is used in applications that involve processing personal data, such as customer service chatbots, there is a risk that this information may be leaked or misused.”

“When using GPT-powered services, be cautious about the information you provide, especially if it involves personal or sensitive data. Consider whether you need to share certain information, such as medical or financial information, and if you do, ensure that the service provider has appropriate data protection measures in place,” adds Bhatia.
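One practical way to follow this advice is to strip obvious identifiers from text before it ever reaches an AI service. The sketch below is purely illustrative (the function name and regex patterns are assumptions, not part of any official API) and shows how common Indian PII formats mentioned in this article, such as PAN numbers and mobile numbers, could be redacted with simple pattern matching:

```python
import re

# Illustrative sketch only: redact common Indian PII patterns from text
# before sending it to any AI service. Patterns are simplified assumptions
# and will not catch every format or avoid every false positive.
PII_PATTERNS = {
    # PAN card numbers: five letters, four digits, one letter (e.g. ABCDE1234F)
    "PAN": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),
    # 10-digit Indian mobile numbers, optionally prefixed with +91
    "PHONE": re.compile(r"(?:\+91[\s-]?)?\b[6-9]\d{9}\b"),
    # Long digit runs that may be bank account numbers (9-18 digits)
    "ACCOUNT": re.compile(r"\b\d{9,18}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

message = "My PAN is ABCDE1234F and my number is 9876543210."
print(redact_pii(message))
# → My PAN is [PAN REDACTED] and my number is [PHONE REDACTED].
```

Regex-based redaction is only a first line of defence; it reduces accidental exposure but is no substitute for a provider's own data protection measures.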

Legal Recourse: India has a law on safeguarding sensitive personal information under Section 43A of the Information Technology Act. This section mandates that any organisation that receives personal information from an individual must take responsibility for protecting the data through due diligence and reasonable security practices. Furthermore, the organisation is accountable for compensating the affected individual if there is a security breach or data leak. However, if the individual willingly shares their personal information as per the organisation’s privacy policy and consents to the information being shared with a third party, they relinquish their right to take action against the organisation.

Says Abhishek A Rastogi, founder of Rastogi Chambers, who is advising several clients in cyber fraud and other related disputes: “Any artificial intelligence, or for that matter any platform which hosts such intelligence, can be a threat as it may have the capability to provide personal data which is not in the public domain. When any platform pulls data that is not in the public domain and provides such information on being searched on an artificial intelligence platform, there is a high likelihood of infringement of rights, and such an act of the platform can be a subject matter of dispute before courts. Accordingly, it is important that personal data is not posted on public sites where artificial intelligence platforms can access it.”

The other related risk is fraud enabled by aggregating information from multiple sources, including artificial intelligence platforms. When a fraudster pieces together data from various sites, including AI platforms, the likelihood of financial fraud rises sharply. “Accordingly, it is always advised that robust passwords must be kept, and there must be a two-step verification process for any financial transaction. The victim must never be shy of sharing information about the fraud, either with relatives, friends, or cyber cells. This will increase the awareness and help the public,” adds Rastogi.

Outlook Business & Money