Data Privacy - Tech Wire Asia https://techwireasia.com/tag/data-privacy/ Where technology and business intersect

Securing Data: A Guide to Navigating Australian Privacy Regulations https://techwireasia.com/06/2024/securing-data-a-guide-to-navigating-australian-privacy-regulations/ Mon, 17 Jun 2024

The post Securing Data: A Guide to Navigating Australian Privacy Regulations appeared first on Tech Wire Asia.

Guest Writer: Louise Wallace, The Missing Link

In the ever-changing data security sector, privacy regulations are in constant flux. With data breaches and cyber threats on the rise, organisations, particularly those in the financial services sector, need to prioritise data protection and comply with privacy regulations. Your data is valuable, and understanding Australian privacy regulations is crucial for safeguarding your organisation’s sensitive information and maintaining customer trust.

Understanding Australian Privacy Regulations

Firstly, let’s examine Australia’s regulatory environment. Recognised globally for its stringent data privacy laws, Australia imposes specific obligations on businesses, including financial institutions, regarding data management. Beyond general compliance with data protection laws, financial institutions handle large quantities of confidential client data, which subjects them to additional scrutiny.

APRA Compliance and Requirements

Financial institutions regulated by the Australian Prudential Regulation Authority (APRA) are required to comply with Australian privacy regulations. Of particular importance is compliance with CPS 234, an information security standard designed to mitigate cyber threats. Compliance involves the implementation of security measures such as asset classification and incident detection, bolstering data security and fostering a more secure digital landscape.

Top 5 Tips for Security Best Practice

1. Start with an Assessment:

  • Know your environment
  • Evaluate how your organisation handles data security and privacy risks
  • Identify possible vulnerabilities, compliance gaps, and critical data

2. Protect Critical Data:

  • Regularly conduct access reviews
  • Implement Multifactor Authentication
  • Control access based on roles
  • Review security controls regularly to protect sensitive data
  • Limit access to authorised employees to avoid personal information being shared or misused
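In practice, the role-based access controls described above boil down to an explicit allow-list of permissions per role, denied by default. Here is a minimal Python sketch; the role and permission names are purely illustrative, not drawn from any particular standard:

```python
# Minimal role-based access control (RBAC) sketch.
# Role and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "teller": {"read:account"},
    "analyst": {"read:account", "read:transactions"},
    "admin": {"read:account", "read:transactions", "write:account"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read:transactions"))  # True
print(is_allowed("teller", "write:account"))       # False
print(is_allowed("contractor", "read:account"))    # False: unknown role
```

With this pattern, a periodic access review reduces to auditing one table: does each role still need every permission it grants?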

3. Monitor and Review Data Access:

  • Use reliable controls to monitor and audit systems
  • Ensure effective tracking of data access, modifications, and transfers within your organisation’s network
  • Regularly review audit logs to detect suspicious activities, unauthorised access attempts, and potential risks to sensitive data
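Part of that audit-log review can be automated. The sketch below assumes a simplified, hypothetical log format ("timestamp user event") and flags accounts with repeated failed access attempts:

```python
# Flag users whose failed access attempts reach a threshold.
# The log format here is hypothetical.
from collections import Counter

log_lines = [
    "2024-06-01T09:00:01 alice ACCESS_OK",
    "2024-06-01T09:00:05 mallory ACCESS_FAIL",
    "2024-06-01T09:00:09 mallory ACCESS_FAIL",
    "2024-06-01T09:00:14 mallory ACCESS_FAIL",
    "2024-06-01T09:01:00 bob ACCESS_OK",
]

def suspicious_users(lines, threshold=3):
    """Return users whose failed attempts meet or exceed the threshold."""
    fails = Counter(
        line.split()[1] for line in lines if line.endswith("ACCESS_FAIL")
    )
    return sorted(user for user, count in fails.items() if count >= threshold)

print(suspicious_users(log_lines))  # ['mallory']
```

A real deployment would feed the same logic from a SIEM or log pipeline rather than a list of strings, but the review principle is identical.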

4. Encrypt Sensitive Data:

  • Apply suitable encryption for data storage and transmission
  • Cleanse the data as needed
  • Monitor data usage through auditing

5. Employee Training and Awareness:

  • Cyber security is crucial in the financial industry due to increased online banking and digital transactions.
  • Protecting customers’ sensitive financial information against cyber threats is a top priority for financial organisations.
  • Conduct security awareness training to educate employees on privacy regulations, cyber security threats, and the importance of following cyber security best practices. This helps create an organisational culture of privacy and security awareness

Safeguarding Data while Optimising Business Operations

Understanding your organisation’s day-to-day operations will set the foundation for optimising business operations and security decision-making. Context is key in this process, as a one-size-fits-all approach fails to consider the unique risk profile of each business.

As a cyber security professional, collaborating with key stakeholders (legal, compliance, and IT teams) is vital. Together, you can create a comprehensive privacy compliance strategy that aligns with your organisation’s objectives and optimises business operations. Balancing data protection and business operations is essential for a robust security framework.

Here are some tips to achieve this balance:

  • Privacy by design: Build privacy into your systems and processes from the very beginning, so they work seamlessly to keep data safe.
  • Data minimisation: Only collect the personal information your business actually needs. Avoid asking for unnecessary data that could increase the risk of a data breach or privacy violation.
  • Regular audits and assessments: Regularly check your systems to find privacy gaps or weaknesses. This proactive approach helps address issues before they become significant problems.
  • Incident response plan: Develop a plan outlining the steps to be taken in case of a data breach or privacy incident, including processes for notifying affected individuals, regulators, and stakeholders.
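Data minimisation, in particular, can be enforced mechanically: allow-list the fields you actually need at the point of collection and discard everything else. A Python sketch, with hypothetical field names:

```python
# Data minimisation: store only the fields the business actually needs.
ALLOWED_FIELDS = {"email", "display_name"}  # hypothetical allow-list

def minimise(submitted: dict) -> dict:
    """Drop any submitted field that is not on the allow-list."""
    return {k: v for k, v in submitted.items() if k in ALLOWED_FIELDS}

raw = {
    "email": "customer@example.com",
    "display_name": "Sam",
    "date_of_birth": "1990-01-01",   # not needed, so never stored
    "home_address": "1 Example St",  # not needed, so never stored
}
print(minimise(raw))  # {'email': 'customer@example.com', 'display_name': 'Sam'}
```

Fields that are never collected can never be breached.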

Strengthening your Organisation’s Security

Improving cyber security isn’t just a prudent decision but an imperative one. As privacy regulations continue to evolve, it’s crucial to maintain a persistent approach to cyber security and data protection compliance. Unfortunately, it’s not a one-time activity.

Understanding Australian privacy regulations and adhering to industry standards is essential for maintaining data security and consumer trust. By incorporating privacy principles into system design, conducting regular assessments, and implementing incident response plans, you can enhance your organisation’s compliance and follow cyber security best practices.

Managed security services are valuable for safeguarding your IT assets. The Missing Link understand the importance of cyber security and are dedicated to assisting businesses in securing their operations and data.

The Missing Link offer more than standard services: advanced threat detection, incident response, and security reporting tailored to your business. Their cyber security solutions are designed to keep your organisation, data, systems, network, and users safe, strengthening your security capability and giving you peace of mind as you drive your business forward.

How secure is your organisation?

If you want to boost your cyber security but don’t know how to begin, take The Missing Link’s cyber security self-assessment. This will help you measure your capabilities across critical functions such as cyber defence, security governance, architecture, and risk management.

Take the free security assessment today!

Ethical Threads: Transforming Fashion with Trust and Transparency https://techwireasia.com/06/2024/cx-customer-experience-data-privacy-new-opportunities-affinidi-mens-clothing-india-wellbi/ Thu, 13 Jun 2024

We interview Wellbi, an ethical clothing brand, and Affinidi, creator of the Affinidi Trust Network, about their work together and how the two companies are building a privacy- and business-focused future.

Online fashion and apparel trading is probably one of the most hotly contested markets in the world. Creating a brand that’s differentiated clearly from its competition is a tough call in any vertical, and the clothing industry’s reputation has suffered when businesses compete purely on price.

Consumers are aware that companies offering cut-price, fast fashion have likely cut corners and abused their supply chain, with horrendous reports of manufacturers’ working conditions making headline news. Consumers increasingly vote with their wallets and deliberately make ethical choices when shopping online. Being an ethical clothing brand and doing so demonstrably is a win-win – it’s a massive competitive differentiator, consumers feel great about their chosen vendors, and workers in the supply chain don’t get exploited.

There remains another challenge, however. If a clothing brand (or indeed, any company) wants to act ethically in every aspect of its activities, it has to examine the ways it provides customer experience.

Creating a customer experience (CX) is a critical area for online companies today, and it hinges on providing personalised and helpful interactions between the seller and buyer. But for the vast majority of the world’s ethical brands, their ethical practice stops there. In previous articles here on Tech Wire Asia, we’ve discussed how many businesses collate third-party data belonging to customers and prospects and form largely inaccurate and privacy-invasive pictures of them. These pictures are then used to create messaging for touch-points that is not only a waste of costly resources but actively alienates users.

That alienation happens when consumers feel that brands know too much about them (which is borderline creepy), when the data is inaccurate (wasted messaging), or when it is accurate but irrelevant. The latter case is very common: a customer tagged by third-party data as interested in golf, for example, finds that every data-driven fashion suggestion comprises eye-watering colours.

As part of our exploration into how relevant information, given consensually by customers, can craft customer experiences and the massive advantages this offers, we spoke to Supreeth Kashyap of Wellbi, an ethical clothing brand operating in India. Wellbi carefully sources the hand-woven fabrics used in its range from artisans across the country, paying them a premium rate for their labour and guaranteeing them a consistent commission for their work.

“Basically, I started with a vision of empowering rural artisans,” said Supreeth, the founder and CEO of Wellbi. “When I was criss-crossing rural India, I found a group of artisans who were producing the finest fabrics, but they didn’t have a good market linkage for the product. They were earning less than three dollars [a day]. Textiles is the second largest employer in India, just after farming and agriculture. […] Before, they were getting work for only two to ten days in a month. Right now, they’re getting work for close to the entire month, 30 days. So now things have changed for them considerably.”

It’s a company that takes its ethical values into every aspect of its operations, including managing its customers’ identities, from which it can create a level of trust in the Wellbi brand that informs the customer experiences it creates.

The concept embraced by Wellbi and many other brands that genuinely care for their customers is that of zero-party data – that’s information willingly exchanged by customers and prospects with the company in a way that ensures their privacy and security. Wellbi uses Holistic Identity Management powered by the Affinidi Trust Network (ATN). You can read more about the concepts of Holistic Identity Management and the ATN in previous articles we’ve published on this site.

By basing its customer experience on consent-driven data, Wellbi formulates the critical description of each individual from information relevant to the customer’s interactions with the brand. As trust and loyalty build, the picture gains detail, and the quality of CX improves in ways that are non-invasive, secure, and consensual.

Glenn Gore, CEO of Affinidi, told us, “It comes back to a community of like-minded consumers working with like-minded businesses – people who care about buying organic goods, they’re likely doing it across multiple facets of their life. They’re buying organic beauty products, they’re buying organic fashion. So I think part of [the Affinidi Trust Network] is just connecting brands and businesses that are very focused on that [ethos]. I think there’s a lot of ‘greenwashing’ out there in the industry for these things. So being able to have that trust network where […] you connect through, saying, ‘We’ve got consumers who want to spend money on these types of goods’. [Consumers] make a conscious decision about investing their money in these types of causes. Businesses act as a marketplace, as gateways that [provide] a discovery mechanism for these very small-batch products. [That] I think is actually what consumers are looking for.”

The concept of zero-party data and the ATN mean that Wellbi can find new audiences that resonate with its ethical ethos and brand vision. With their permission, online shoppers whose personal preferences align with Wellbi can be offered opportunities with similar companies, safe in the knowledge that their personal data is treated ethically.

At present, it’s anomalous that companies that trade on their ethical stance still act unethically when it comes to customer information management, building ‘insights’ from data that they have no consent to use, on prospects and customers whose data is treated with little respect for the values held by the brand and the online citizen.

The poor click-through rate of ‘traditional’ platforms like Meta is around the 2-3% mark, which is indicative of how inaccurate most companies’ perceptions are of their users. With the Affinidi Trust Network, brands can develop CX according to information dictated by the customer, not privacy-invasive and inaccurate data. That promotes an immediate trust on which a meaningful CX can be created. “But there’s not many things we spend money on as businesses where a 98% failure rate is normal. […] You don’t need to shift that needle much. A 1% difference is actually a 50% improvement,” Glenn said.

To find out more about Wellbi and browse its ethical range of menswear, head over to its online store. You can read more about Affinidi Trust Network here and start to create an ethical and powerful customer experience that is truly and accurately personalised.

Building trust in the data economy: Enerlyf and Affinidi redefine CX, privacy and energy efficiency https://techwireasia.com/05/2024/the-affinidi-trust-network-building-better-cx-based-on-privacy-and-excellence/ Mon, 20 May 2024

Startup Enerlyf is using Affinidi’s technology to create new markets based on trust, mutual advantage and respect for users’ privacy. With Glenn Gore, CEO of Affinidi.

Like every company operating today, startup Enerlyf knew it had to create a world-beating CX (customer experience) for its users. As a product, its premise is compelling: an independent control system for domestic aircon units that synchronises with ceiling-mounted fans to reduce a household’s energy consumption – a critical saving in founder Chirag Panchal’s home country of India.

The lack of connectivity to the cloud is a core aspect of Enerlyf’s current vision for its CX. “Customers don’t need to share any data and they are still able to get their own personalised temperature and energy saving. That was one of the key things we identified in its popularity,” Chirag told us.

The company’s invention saves its customers up to 35% on their home cooling costs – the electricity saved over two nights is enough to cover two rural households’ energy needs.

The lack of need for potentially privacy-invasive ‘smart home’ technologies comes at a time when data privacy is becoming increasingly central in many consumers’ minds. Data aggregated from multiple sources can be used to influence our behaviour in ways we find disturbing, and when those influences are traced back to a product purchase or service sign-up, the effects can easily negate any investment a brand has made in its CX.

That’s because, at present, there is an imbalance in our current concepts of CX, one that loses companies loyal customers and destroys trust carefully built over months and years.

The unevenly weighted scales

When buying a product or service, and in every interaction with an organisation, we expect a quality CX. It’s part of the reciprocal arrangement entered into at the point of purchase, with the standards of our expected experiences set high by global household names like Uber, Amazon, Rakuten and Ola.

But in addition to the quid pro quo, many companies will additionally monetise or even abuse the data they gain from every interaction. By piecing together information gathered with that from third parties, companies build detailed yet often irrelevant pictures of their buyers or users. Too often, their reasons for doing so have little to do with improving CX or the product now in the customer’s hands. Instead, data is used as a secondary revenue source, the benefits of which never reach the customer.

Digitisation without tears

Like every product, iterating on Enerlyf’s core designs is how it will improve. To achieve this, Chirag knows that user data can be incredibly helpful for each of his customers. “[We want to] add internet connectivity, IoT capabilities and AI to our systems, so that we can build to make greater systems. When we envisioned adding IoT [functions] and AI, that is where Affinidi came in and [our] mission and vision becomes much stronger,” he said.

Affinidi’s vision for how data can be shared anonymously is central to what Chirag perceives as the next generation of customer experience for Enerlyf customers. With zero-party data (first-party or user data whose source is not identifiable by a third party – see our previous articles here and here for more) the customer experience for Enerlyf’s products can give each customer advantages that will not compromise their personal information.

Initially, Chirag aims to produce personal and community value for Enerlyf users via the Affinidi Trust Network. “So say for example, if there is a community with 1000 apartments, and we have 200-500 users using our product, we wanted to offer access to a local weather station for them. […] For example, if parents want to take their children out, they can immediately check what is the air quality of that area? What is the outside temperature, humidity, so many other things. So that is a roadmap Enerlyf is connecting, like user-personalised profiles with community-level data.”

Affinidi’s CEO, Glenn Gore sees further benefits for Enerlyf user communities that are both empowered and protected by the Affinidi Trust Network (see here for more details): “Air con repair services could do reverse bidding, for example, saying, ‘We know you run your air conditioning for 300 hours, by giving it a service, it’s going to be more efficient, you’ll save some energy.’ These are new techniques that people could use, but while maintaining user anonymity.”

Customer experience at present goes little further than easy-to-use GUIs (graphical user interfaces) and personalised recommendations to buy more product (‘Hey [name], you bought [x], so why not buy [y]?’). What Enerlyf and Affinidi envisage is a CX where the ‘C’ for Customer is writ large – realisable benefits that come from anonymised, specific data deliberately released by individuals and companies to others, with both parties gaining.

National level advantages

The Indian government’s Green Credits scheme is an example of a nascent market that can be seized on by entrepreneurs like Chirag. He sees the users of Enerlyf products as being able to prove – without compromising their personal information or identities – exactly how much power their activities have saved, and to ‘spend’ those credits elsewhere.

Glenn said, “There is value in consumers being able to say to brands, ‘Hey, I am taking actions in my own life.’ And brands can turn that into giving you savings or opportunities or preferences based on the decisions being made.”

New definitions of CX

End-users confident of their privacy are so much more likely to trade with brands they know are respectful of data security and anonymity. Customer experience on the Affinidi Trust Network is self-determined by the individual and goes far beyond our present concepts of CX. In 2024, CX is too often designed not for the benefit of the customer, but as a crude and often unwanted opportunity to cross-sell and upsell.

Instead of a reluctance to interact with a brand because of the potential for personal information misuse, brands can build trust with customers and prospects based on information that people approve for release to specific organisations.

As Enerlyf iterates on its energy-saving product line, it’s changing the way we think about personal well-being and energy-saving. With its cutting-edge ClimateOS and state-of-the-art CX, Enerlyf allows households to reduce energy consumption and transform into fully connected, distributed energy resources in the wider energy grid. Doing so offers the potential for enormous new markets and a more sustainable future for all of us.

The Affinidi Trust Network is the basis on which we will see this new data and energy economy built and the new definition of customer experience emerge. You can find out more about Affinidi Trust Network here.

After all the drama about security and privacy, Biden joins TikTok https://techwireasia.com/02/2024/after-all-the-drama-about-security-and-privacy-biden-joins-tiktok/ Wed, 14 Feb 2024

  • After all the noise about security and privacy, US President Joe Biden joins TikTok.
  • President Biden joins TikTok to reach younger voters, who prefer using the social media app for news.
  • TikTok has over 150 million users in the US.
  • TikTok continues to face all types of challenges from the Biden Administration. And every time, the social media giant responds to each challenge and allegation in a positive manner.

With over 150 million users in the US, TikTok continues to be as transparent as it can be in everything it does, despite facing significantly more rigorous investigation than the likes of Facebook or X. While the US government has accused the app of being used for espionage by China, TikTok continues to prove otherwise, showcasing the steps it has taken to ensure the security and privacy of those using its app – and their data.

Most recently, TikTok CEO Shou Zi Chew was grilled by US senators on his ties with China. The Singaporean CEO patiently answered every query, while stating that neither he nor the app shares any data or information with China.

Can we say “McCarthyism much?”

The reality is that while the app does collect data, it does so mostly to improve its algorithms – as every social media app does – helping it improve its performance and cater to users’ needs.

Despite proving that the app does not share data with China, several states in the US have banned the use and download of the app on government devices. Several universities in the US have also banned the use of the app on campus.

In 2022, US President Joe Biden signed legislation blocking most federal government devices from using TikTok. Lawmakers from both sides of the aisle continue to call for the app to be banned wholesale in the US over concerns the government in Beijing might be able to access user data.

Still, the app remains popular with young people in the US, a demographic that the White House is keen to energize for this November’s election. In fact, TikTok is not only capable of influencing the US election; it is also now the most popular news source for the younger generation.

According to a report by Morning Consult, 47% of TikTok users say they get news from the app, compared to 30% in July 2022. Younger generations have become more accustomed to using this social network to follow the news – particularly in the wake of the “Muskification” of the platform everyone is still deadnaming as Twitter, and the return of extreme right wing accounts to that platform in the name of “free speech.”

US President Joe Biden is officially on TikTok. Election? What election?

Biden joins TikTok

On the surface then, it’s a bizarre twist that the US President has now joined TikTok.

That’s right. Take a moment to absorb that information. For all the noise made over the last few years and the concerns the administration has had over the app, the US President is finally on TikTok.

So why the sudden change? As expected, with the election just months away, the US President is hoping to reach younger voters – a huge demographic that still uses TikTok and relies heavily on the social media app for its news.

According to a report by TIME, the account, @Bidenhq, will be run by Biden’s campaign staff alongside the campaign’s other accounts on X, Threads, Facebook, and Truth Social. The President’s TikTok account was unveiled during the Super Bowl over the weekend and already had more than 65,000 followers as of Monday. Biden’s first video on the app also has more than half a million views.

“The campaign will continue meeting voters where they are, innovating to create content that will resonate with critical audiences and the core constituencies that make up the President’s diverse and broad coalition of voters,” advisers said. Biden campaign advisers added that they “are taking advanced safety precautions around our devices and incorporating a sophisticated security protocol to ensure security.”

Will the Administration now stop its TikTok persecution?

So why the hypocrisy?

Meanwhile, White House spokesperson John Kirby said nothing has changed about TikTok use from a national security perspective. Some users feel Biden’s move to join TikTok is a sign of how badly Democrats want to court young Americans ahead of the election.

“It’s shameful that Biden is embracing TikTok to compensate for bad polls driven by his mental decline,” Senator Tom Cotton posted on X. He also called TikTok a “spy app for the Chinese Communist Party.” Senator Josh Hawley also posted on X, apparently having never heard of irony, saying, “Joe Biden is so desperate to do anything to help his sad reelection bid he’s willing to use a Chinese spy app his own government has outlawed.”

While it remains to be seen how much of an impact Biden’s TikTok account will have on the elections, one thing is certain: TikTok will most likely not be going anywhere. For now, the social media app can take some relief from this small but significant win.
Data Privacy Week: the role of tech companies https://techwireasia.com/01/2024/data-privacy-week-the-role-of-tech-companies/ Mon, 29 Jan 2024
  • Data privacy is becoming more important and challenging in the digital age.
  • Organizations should reconsider the measures they have in place and develop a secure and well-governed data foundation.
  • Data privacy is not only a legal obligation but also a business opportunity.
  • It’s Data Privacy Week. As usual, tech companies will take the opportunity to reach out to businesses to highlight the importance of data protection and the consequences of not taking it seriously.

Indeed, tech companies play a big role in data privacy today, simply because businesses rely on them to help protect their data. While organizations approach data privacy and protection in different ways, the main goal is the same: keeping data private and secure.

Data privacy is becoming more important and more challenging in the digital age, especially with AI emerging as a game-changer in the industry. Tech companies can help businesses with data privacy in several ways, such as:

  • Implementing strong security measures to prevent data breaches, such as encryption, firewalls, authentication, and access control.
  • Adopting privacy-by-design principles, integrating privacy considerations into every stage of the development and operation of a system.
  • Using emerging technologies that enable data analysis without compromising personal information, such as differential privacy, federated learning, and homomorphic encryption.
  • Educating and empowering customers about their data rights and choices, such as providing clear and transparent privacy policies, consent forms, and opt-out options.
  • Complying with relevant data protection laws and regulations, such as the General Data Protection Regulation (GDPR) in the European Union, the California Consumer Privacy Act (CCPA) in the United States, and the Personal Data Protection Act (PDPA) in Malaysia.
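Of the emerging technologies above, differential privacy is the simplest to illustrate: calibrated random noise is added to an aggregate statistic before it is released, so no individual record can be inferred from the output. The Python sketch below samples Laplace noise by inverse-CDF using only the standard library; the epsilon value is arbitrary:

```python
# Differential-privacy sketch: release a noisy count instead of the
# exact one. A counting query has sensitivity 1, so the Laplace noise
# scale is 1/epsilon.
import math
import random

def noisy_count(true_count: int, epsilon: float = 0.5) -> float:
    """Return the count plus Laplace(0, 1/epsilon) noise."""
    scale = 1.0 / epsilon
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))  # inverse CDF
    return true_count + noise

random.seed(7)  # fixed seed only to make the example repeatable
released = noisy_count(1000)
print(round(released, 2))  # close to, but not exactly, 1000
```

Smaller epsilon values add more noise, giving stronger privacy at the cost of accuracy.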

By following these best practices, tech companies can not only protect their customers’ data but also gain a competitive advantage and enhance their reputation in the market. At the end of the day, data privacy is not only a legal obligation but also a business opportunity.

In conjunction with Data Privacy Week, several tech executives share their views on data privacy with Tech Wire Asia.

Data privacy has to be a prerogative for everyone.

James Fisher, chief strategy officer, Qlik

We are squarely in the middle of an AI boom, with generative AI promising to take us into a new era of productivity and prosperity. However, despite its vast potential, there remains a lot of trepidation around the technology – particularly around how to use it responsibly. For example, there are risks around the violation of data privacy and individual consent when it comes to the data that AI algorithms are trained on.

Trust in generative AI – and the data powering it – is key for the technology to be embraced by enterprises. With the risk of misinformation, the use of deepfakes and more, it will take hard work to build this trust. One way to do this is through improving the data that AI is fed – because AI is only as good as its data.

We are seeing steps in the right direction here through a push for better governance, origin, and lineage of data to power AI. At an enterprise level, businesses must look to test the validity of their data and get robust data governance in place. Then, it will be possible to use AI to generate more trustworthy and actionable insights down the line.

    Trevor Schulze, chief information officer at Alteryx

    Powerful enterprise use cases for generative AI are still being discovered, but so too are limitations in terms of data privacy and regulation. The Information Commissioner’s Office‘s recent review of how data protection laws should apply to such generative AI applications was a key reminder of this. The EU AI Act, which is the first official AI regulation and requires AI systems deployed in the EU to be safe, transparent, traceable, non-discriminatory and environmentally friendly, is another good example of regulators moving at pace to react to AI concerns.

    Data privacy is cited by 47% of data leaders as the reason why AI capabilities have not yet been deployed within their organizations. Even for organizations that are starting to use new AI capabilities through commercial or custom-built LLMs, understanding what data was used to train these models is proving very difficult.  

    Clear data governance policies will be critical in building overall confidence in AI moving forward. Creating or reinforcing data steward roles within enterprises to advocate for secure AI use by creating, carrying out and enforcing data usage rules and regulations will display a commitment to data privacy and build company-wide confidence.


    Niels van Ingen, chief customer officer for Keepit

    No one likes surprises, particularly IT executives who believe their SaaS cloud providers have taken all the necessary steps to back up customers’ critical enterprise data. This is never truer than when disaster strikes, whether from an internal mistake or an attack from the outside, leaving business operations at a complete standstill.

    The unfortunate truth is that most SaaS providers don’t offer the necessary level of data backup and recovery that enterprises require to get back up and running.

    And guess what? If you read the cloud agreement, you’ll discover SaaS vendors aren’t responsible for data backup. The onus is on you.

    It’s easy for individuals and businesses using popular cloud-based services to believe their data is “backed up in the cloud” and easily retrievable in the event of an attack or accidental deletion. However, they quickly learn – often too late – that backup services from SaaS vendors are usually very limited, disorganized, or prohibitively expensive to access. Organizations are also surprised to learn that many SaaS providers offer only a limited data retention period, after which the data is permanently deleted.

    That’s why the only true backup – and the last line of defense in SaaS data protection – is having granular, reliable, and fast backup and recovery capabilities, with the data stored separately from the SaaS vendor’s environment.

    Sanjay Deshmukh, senior regional vice president for ASEAN and India, Snowflake

    This Data Privacy Day, organizations should reconsider the measures they have in place and develop a secure and well-governed data foundation to ensure the privacy of their customer data. Snowflake recommends a 3-step approach to build this secure and well-governed data foundation:

    1. Unify: Break down the silos and organize the data in one place to get a complete view.
    2. Understand: Identify what information is in your data, especially any sensitive information.
    3. Secure: Protect the data with governance and privacy features such as data masking, tokenization, role-based access control, and row-level security.
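    The “Secure” step names features that platforms implement at the SQL layer; the concepts behind two of them, masking and tokenization, can be sketched in standalone form. This is a toy illustration of the ideas, not Snowflake's actual implementation, and all names here are ours.

```python
import secrets

_vault = {}  # token -> original value (in practice, a separately secured mapping store)

def mask_email(email: str) -> str:
    """Static masking: keep the domain, hide the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token; the original stays in the vault."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Authorized reverse lookup of a token."""
    return _vault[token]
```

    The design difference matters: masking is one-way and suitable for analytics views, while tokenization is reversible for the few roles allowed to call the vault.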

    This secure and well-governed data foundation will enable organizations to protect their customers’ data, and to accelerate innovation and transformation by letting AI and large language models run against this secured data.


    Remus Lim, vice president, Asia Pacific and Japan, Cloudera

    Generative AI has hogged headlines in 2023 as organizations scrambled to adopt the technology in the enterprise. Chatbots, automated report generation and personalized emails are all examples of how generative AI can drive creativity and productivity while improving customer experience. It is, however, crucial to note that all AI/ML models are only as good as the data that they are trained on.

    As companies look to deploy more AI and ML technologies across the business, there is an increasing demand for access to their data across all environments. Advancements in AI/ML have even let organizations extract value from unstructured data, which makes the management, governance, and control of all data critical.

    With businesses looking to democratize more of their data, it is key to focus on data privacy and security. They must build their strategies and plans with data security and governance at the forefront as tackling third-party security solutions is often a difficult and expensive process. Investing in modern data platforms and tools with built-in security and governance capabilities allows companies to democratize their data in a secure and governed manner, while successfully training enterprise AI/ML models.

    In fact, DataOps, which is the practice of improving the communication, integration and automation of data flows between employees who work with data and the rest of the organization, is expected to hit US$10.9 billion by 2028 as businesses strive to make more data-driven decisions by increasing employees’ access to data.

    David Sajoto, vice president of Vectra AI, Asia Pacific & Japan

    The cybersecurity market in the APAC region is billions of dollars strong and growing, thanks to a continually evolving and destructive threat landscape. This underscores the crucial responsibility organizations have in safeguarding sensitive information and serves as a reminder of the challenges involved in maintaining data privacy. However, with the right awareness, training and security measures in place, not to mention advanced attack signal intelligence powered by AI, businesses can reclaim the certainty that their data is protected.

    Privacy laws grew significantly in 2023, by as much as 25% over 2021. Countries throughout the region either adopted or are in the process of implementing comprehensive privacy laws for the first time, with more mature markets aligning laws closely with the likes of Europe to strengthen defense measures. Breach notification requirements, data privacy officers and data localization requirements are increasingly becoming the norm.

    Despite these efforts, unwelcome breaches made headlines. Throughout APAC, we heard stories of customer information leaks, citizen details being sold online, patient data being stolen and abused, services being corrupted and taken down, and ransomware attacks threatening protected data to be released unless hefty sums were paid out.

    As we strive to make the world a safer and fairer place, companies have a responsibility to their customers, partners and end users to implement the right practices that will ensure their privacy and data are protected.

    That brings us to 2024. Data Privacy Day presents an excellent opportunity to have difficult and important conversations about how we can better protect our people – both staff and customers – and the company name. Organizations must invest in the right solutions that are geared towards both prevention and detection, leveraging advanced technologies such as AI to extend attack signal intelligence and support security teams.

    As attacks become more intelligent and powered by the likes of generative AI, businesses will face heightened expectations to demonstrate their commitment to implementing comprehensive measures aimed at safeguarding data. Technology decision-makers must reflect on what has worked and what hasn’t, adopting an approach that effectively stops and catches attacks before it’s too late.

    Data privacy is becoming more important and challenging in the digital age, especially with AI being a game-changer in the industry.


    Brian Spanswick, CISO and head of IT, Cohesity 

    World Data Privacy Day is an excellent opportunity for public and private organizations to assess the effectiveness of their data security and management practices, and this has never been more critical than it is now.

    The accelerated adoption of LLMs like ChatGPT has added a significant threat to an already critical data security posture. The value of data itself, and the criticality of that data to an organization’s operations, have never been more exposed to disruption and exfiltration. This increases the need for organizations to understand where their data is, ensure that data is encrypted in transit and at rest, and have the ability to recover that data while minimizing disruption to operations.

    Unfortunately, we live in a world where cyberthreats and successful attacks are a challenge all organizations face, because of the disruptive impact they have on business continuity and the lucrative financial gains they can provide to threat actors who exfiltrate an organization’s data.

    By using this annual event as a catalyst to reassess, evaluate and revise your data security and management best practices, you’ll help set your organization up for success in the year ahead. Adopting modern technology platforms that help you protect, secure and recover data is both fundamental and critical.

    In 2024, many solutions are integrating AI to help turbocharge organizations’ IT and security capabilities, increasing the effectiveness of these fundamental data protection controls.

    The post Data Privacy Week: the role of tech companies  appeared first on Tech Wire Asia.

    Data privacy week: a collective responsibility https://techwireasia.com/01/2024/data-privacy-week-a-collective-responsibility/ Wed, 24 Jan 2024 00:15:58 +0000
  • Data Privacy Week calls for everyone to take control of their data.
  • Employees can contribute towards data privacy by following some best practices.
  • AI will have a strong role to play in data management.
    The theme for this year’s Data Privacy Week is Take Control of Your Data. Throughout the week, the National Cybersecurity Alliance (NCA) in the US will emphasize the critical significance of digital privacy for both consumers and businesses through a series of educational webinars featuring experts from various industries.

    The week builds on the success of Data Privacy Day, which began in the United States and Canada in January 2008 as an extension of Data Protection Day in Europe. Data Protection Day commemorates the January 28, 1981 signing of Convention 108 – the first legally binding international treaty dealing with privacy and data protection.

    “Knowing how to safeguard your personal information has never been more important than it is today. Between social media, mobile apps, internet-connected devices and the rise of artificial intelligence, vast amounts of personal data are being gathered constantly, putting individuals’ privacy at risk,” said Lisa Plaggemier, executive director at NCA.

    “As innovation continues to outpace regulation, individuals and businesses alike need to make concerted efforts to educate themselves and take a proactive role in preserving the privacy of sensitive data. Through Data Privacy Week we hope to inspire better data stewardship and empower people to reclaim control of their digital footprints, balancing innovation with privacy.”

    Today, there is no denying that data privacy is a crucial issue for both individuals and organizations in the digital age. While many feel that businesses should be responsible for the data they hold, be it that of their customers or employees, everyone actually has a role to play in ensuring data is not compromised.


    As such, this year’s theme calls on everyone to be in control of their data. After all, employees are often considered the weakest link in cybersecurity for any organization. Most data breaches also occur because of employees who were careless in managing their credentials.

    Achieving data control

    For employees, there are several ways they can achieve data control. But more importantly, it is also their responsibility to ensure their business and customer data is not compromised. Employees can contribute towards data privacy by following some best practices, such as:

    • Educating themselves on the importance of protecting personal and sensitive information, and the potential risks associated with online activities.
    • Securing their devices with strong passwords, encryption, antivirus software, and firewalls.
    • Implementing two-factor authentication (2FA) for accessing online accounts and services.
    • Being wary of phishing attempts and other malicious emails that may try to steal their credentials or data.
    • Minding their digital footprint and limiting the amount of personal information they share on social media and other platforms.
    • Respecting the privacy rights of others and only accessing or using data they are authorized to.
    • Following the security policies and guidelines established by their organization, and reporting any privacy concerns or incidents to the appropriate person or team.
    • Participating in regular training sessions, simulated phishing exercises, and privacy assessments to enhance their awareness and skills.
    • Rewarding and recognizing security-conscious behavior among their peers, and creating a culture of compliance and trust.
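    The two-factor authentication item above can be made concrete. Below is a minimal one-time-password generator following RFC 4226 (HOTP) and RFC 6238 (TOTP), the algorithms behind common authenticator apps, using only the Python standard library; production systems should rely on a vetted implementation rather than this sketch.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval)
```

    The server and the user's phone share only the secret; because both derive the code from the current time window, a stolen password alone is no longer enough to log in.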

    These are some of the ways that employees can own their online privacy and help their organization safeguard data and maintain trust. Even so, enforcing this remains a major problem for businesses: insider threats, for example, are often caused by unhappy employees or those who are willing to compromise company data.

    Employees can contribute towards data privacy by following some best practices. (Image generated by AI).

    “The continual evolution of regulations and frameworks, coupled with the increasing value of data and widespread integration of data-driven technologies, necessitates a proactive stance towards identity security. We urge organizations to prioritize robust identity security controls and hygiene practices. Enhancing employee training, using automation, and adopting zero trust solutions are essential to this proactive approach. By doing so, they can mitigate risks, protect customer trust, and thrive in a world where data is the new currency,” commented Lim Teck Wee, area vice president for ASEAN at CyberArk.

    Employers can ensure employees adhere to data privacy guidelines by implementing some of the following strategies:

    • Establish clear expectations and policies for data protection and compliance, and communicate them to all employees.
    • Provide training and development opportunities for employees to enhance their awareness and skills on data privacy best practices.
    • Reinforce data privacy compliance consistently and lead by example, by rewarding and recognizing security-conscious behavior and addressing any violations or incidents promptly.
    • Maintain open communication and transparency with employees about the purpose and scope of data collection and processing, and respect their privacy rights.
    • Provide adequate resources and tools for employees to secure their devices, accounts, and data, such as encryption, antivirus software, firewalls, and two-factor authentication.
    • Develop a monitoring strategy that puts employee privacy first and keeps tracking to the minimum needed to meet the intended purpose.
    • Consider creating an employee privacy policy (or a privacy notice) to inform your employees about your monitoring strategy and how you handle their personal data.
    • Update and review privacy policies and practices regularly to ensure they are aligned with the latest laws and regulations.

    These are some of the ways that employers can foster a culture of compliance and trust among their employees and protect their data privacy.

    New laws putting guardrails on using personal data in the large language models (LLMs) behind generative AI tools are gaining steam, making data compliance more complex.


    Data Privacy Week: employee governance and compliance

    According to Ajay Bhatia, global VP & GM of data compliance and governance at Veritas Technologies, businesses need to first realize that data privacy isn’t something that can be achieved in a single day.

    “Data privacy is a continual process that requires vigilance 24/7/365. Top of mind this year is the impact AI is having on data privacy. AI-powered data management can help improve data privacy and associated regulatory compliance, yet bad actors are using generative AI to create more sophisticated attacks. Generative AI is also making employees more efficient, but it needs guardrails to help prevent accidentally leaking sensitive information. Considering these and other developments, data privacy in 2024 is more important than ever.”

    When it comes to compliance, Bhatia stated that new laws putting guardrails on using personal data in the large language models (LLMs) behind generative AI tools are gaining steam, making data compliance more complex. For example, the California Privacy Protection Agency is already working to update the California Consumer Privacy Act (CCPA) to address generative AI and privacy, including opt-out implications. Bhatia believes this type of legislation, like most other privacy regulations, will differ across continental, country and state borders, making the already complex regulatory environment even harder to navigate without help.

    “Whether to implement generative AI isn’t really a question. The value it provides employees to streamline their jobs means it’s almost a foregone conclusion. But that must be balanced with the risks generative AI could pose when proprietary or other potentially sensitive information is fed into these systems.

    To ensure they remain compliant with data privacy standards, whether or not regulatory bodies enact AI-specific rules, IT leaders need to provide guardrails to employees that will limit the likelihood that they accidentally expose something they shouldn’t,” explained Bhatia.

    At the same time, Bhatia also pointed out that AI is making the cyberthreat landscape more dangerous. Cybercriminals are already using AI to improve their ransomware capabilities and launch more sophisticated attacks that threaten data privacy. There’s always been a technological war between defenders and attackers, but now that war is moving into AI-assisted cyber-combat.

    “It’s only going to get harder to defend against these threats without AI-powered resilience that counteracts the evolving landscape of AI-powered attacks,” he concluded.

    The post Data privacy week: a collective responsibility  appeared first on Tech Wire Asia.

    Data Privacy Week: the role of AI in privacy https://techwireasia.com/01/2024/data-privacy-week-the-role-of-ai-in-privacy/ Tue, 23 Jan 2024 01:30:19 +0000
  • Data privacy is a complex and evolving issue that affects individuals, organizations, and society as a whole.
  • Data Privacy Day is an international event that occurs every year on 28th January.
  • While AI can enhance data privacy, it also comes with challenges and risks.
    Data Privacy Day is an international event that occurs every year on 28th January. This year, Data Privacy Week takes place from the 21st to the 27th of January. The purpose of Data Privacy Week is to raise awareness and promote privacy and data protection best practices.

    Over the years, data privacy has seen significant growth, especially with all businesses and governments prioritizing its importance. Data privacy laws also continue to evolve, especially with emerging technologies and the surge of data being generated around the world.

    The EU’s GDPR remains the most powerful law in the world when it comes to data privacy. Governments around the world today implement new privacy laws and regulations based on the standards set by the GDPR. However, with the rise of generative AI use cases around the world, there are now calls for stronger data privacy laws to address them.

    Once again, the EU is hoping to lead on AI regulation. Since AI’s impact extends beyond data privacy, the law will also need to cover the protection of intellectual property and ensure the technology is not misused.

    According to Arun Kumar, regional director at ManageEngine, safeguarding a company’s data is vital because it protects the organization from financial loss, reputational damage, and loss of intellectual property. Successful data protection requires a holistic approach where people, processes, and the technology framework are the focus.


    The challenges to data privacy

    Data privacy is a complex and evolving issue that affects individuals, organizations, and society as a whole. Some of the biggest problems of data privacy today are:

    • Data breaches: Data breaches are unauthorized access or disclosure of personal or sensitive data that can compromise the security, integrity, and confidentiality of the data. Data breaches can result from cyberattacks, human errors, system failures, or malicious insiders. Data breaches can cause financial losses, reputational damage, legal liabilities, and emotional distress for the victims. In 2023, the average cost of a data breach was US$4.45 million according to a report by IBM.
    • Data localization: Data localization is the requirement or preference to store or process data within a specific country or region. Data localization can be motivated by political, economic, or legal concerns. Data localization can pose challenges for global businesses that operate across multiple jurisdictions and have to comply with different privacy regulations and standards.
    • Privacy-enhancing computation: Privacy-enhancing computation (PEC) is a technique that protects data in use from being exposed or analyzed by unauthorized parties. PEC uses cryptographic methods to ensure that only authorized parties can access the data while it is being processed. PEC enables new applications and services that require data processing in untrusted environments, such as public cloud or multiparty data sharing.
    • Facial recognition: Facial recognition is a technology that identifies or verifies a person’s identity based on their facial features. Facial recognition can be used for various purposes, such as security, authentication, surveillance, or entertainment. Facial recognition raises privacy concerns because it can collect and store biometric data without the consent or knowledge of the users. Facial recognition can also be inaccurate, biased, or manipulated.
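    One building block behind privacy-enhancing computation is additive secret sharing, used in secure multiparty computation and federated analytics: a value is split into random shares that are individually meaningless, yet share-wise sums across parties reconstruct only the aggregate. A minimal sketch of the idea; the modulus and function names here are illustrative.

```python
import random

P = 2 ** 61 - 1  # a public prime modulus; shares are elements of Z_P

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares; any single share is uniformly random."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)  # final share makes the sum come out right
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

def aggregate(values: list[int], n_servers: int = 3) -> int:
    """Secure aggregation: each participant's value is shared across servers,
    servers add their shares locally, and only the total is reconstructed."""
    per_server = [0] * n_servers
    for v in values:
        for i, s in enumerate(share(v, n_servers)):
            per_server[i] = (per_server[i] + s) % P
    return reconstruct(per_server)
```

    No single server ever sees an individual's raw value, only one random share of it, yet the final total is exact, which is precisely the untrusted-environment processing the PEC bullet describes.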

    These are just some of the major concerns about data privacy. Other concerns include the use of data by businesses to understand consumer behavior, often without users’ explicit understanding at the point of use. Last year, Meta was fined a record US$1.3 billion and ordered to stop sending European user data to the US. Google has also been fined by several European regulators on data privacy issues.

    “Organizations that implement alerts within their systems can enhance awareness about security incidents. Solutions like security information and event management (SIEM) are critical for enterprises to proactively identify, manage, and neutralize security threats using AI and automation. Education is also key; every single employee should share the responsibility of safeguarding their company’s data by implementing and adhering to data protection policies and processes,” added Kumar.


    The role of AI

    There are concerns about AI’s impact on data privacy. This is because, for an AI use case to work well, the model needs to train on data, and that training data can include personal data.

    For example, an AI image generator learns by analyzing images from across the web and then produces new ones. This has led to problems like deepfakes, where real images and footage of people are easily manipulated, including into non-consensual pornographic content.

    Despite these concerns, some tech companies are also using AI to boost data privacy in various ways. These include:

    • Privacy Concierge: AI systems work as a “privacy concierge” for the network infrastructure to identify, redirect, and process privacy data requests much faster when compared to doing it manually.
    • Data Classification: AI is seen to be highly efficient in classifying data and managing it in an organized way. This can help reduce the risk of data breaches and leaks by applying appropriate security measures to different types of data.
    • Sensitive Data Management: AI can help protect sensitive data from unauthorized access or misuse by using techniques such as encryption, anonymization, or differential privacy. These techniques can ensure that the data remains confidential while still allowing for its analysis or sharing.
    • Data Security: AI can also help improve the security of data by detecting and preventing potential cyberattacks, such as malware, phishing, or ransomware. AI can also monitor and audit the activities of users and systems to identify any anomalies or violations of data policies.
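    The data classification idea in the list above typically starts with pattern-based detection of sensitive values before any masking or access control is applied. A deliberately simple sketch of that first step; real classifiers use far richer patterns, validation such as Luhn checks, and ML models, and the patterns below are illustrative only.

```python
import re

# Illustrative patterns for two common sensitive data types.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Label which sensitive data types appear in a piece of text."""
    return {label for label, pat in PATTERNS.items() if pat.search(text)}

def redact(text: str) -> str:
    """Replace every detected sensitive value with a type tag."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label.upper()}]", text)
    return text
```

    Once data is labeled this way, the appropriate control, such as encryption for card numbers or masking for emails, can be applied per type rather than uniformly.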

    It is important to note that AI is a powerful tool. While it can enhance data privacy, it also comes with challenges and risks. Therefore, it is important to develop and implement ethical and responsible AI practices that respect the rights and interests of data subjects and stakeholders. Some of these practices include:

    • Transparency: AI systems should be transparent about their purpose, functionality, limitations, and outcomes. Users should be able to understand how their data is collected, processed, stored, shared, and used by AI systems.
    • Accountability: AI systems should be accountable for their actions and decisions. Users should be able to hold AI systems responsible for any harm or damage they cause to individuals or society.
    • Fairness: AI systems should be fair and unbiased in their treatment of different groups of people. Users should be able to challenge any discrimination or injustice caused by AI systems.
    • Privacy by Design: AI systems should be designed with privacy in mind from the outset. Users should have control over their own data and how it is used by AI systems.

    “Overall, data privacy is an essential part of any business, not only because it helps them comply with data privacy regulations, but also because it builds trust and protects the valuable data of customers,” Kumar concluded.

     

    The post Data Privacy Week: the role of AI in privacy  appeared first on Tech Wire Asia.

    Debating CAPTCHAs in 2024: stick with solving them or ditch the bot blockers? https://techwireasia.com/01/2024/should-we-keep-solving-captchas-in-2024-or-leave-them-behind-in-the-past/ Mon, 08 Jan 2024 00:30:59 +0000

    The post Debating CAPTCHAs in 2024: stick with solving them or ditch the bot blockers? appeared first on Tech Wire Asia.

  • The necessity of solving CAPTCHAs clashes with user frustrations and evolving privacy and security concerns.
  • CAPTCHAs in 2024 stir debate over their efficiency vs. user inconvenience, with emerging features for smoother browsing.
  • CAPTCHAs face user pushback and advanced cybercriminal bypass methods, highlighting the need for innovative solutions.
  • Alright, let’s start a debate. Solving CAPTCHAs – yes or no?

    It’s 2024; should we keep solving CAPTCHAs or should we leave them behind in the past? Sure, they serve a purpose on the internet – blocking bots from activities like account creation, comment spamming, and bulk purchasing. Yet, they’re also incredibly irritating. There are times I question if I’m a bot myself. Do android writers recognize their android nature?

    Every time a CAPTCHA challenge appears, I find myself scrutinizing a grid of nine images, trying to identify traffic lights, crosswalks, or bicycles. It’s frustrating, especially when I miss one tiny, ambiguous section. And then there are those times when you’re left guessing if a fragment of a car should count or not. More recently, I’ve been tasked with rotating a 3-D rat to align with a puzzle direction, which sounds simple but is actually surprisingly tricky due to poor image quality and difficulty distinguishing the rat’s head from its tail.

    Now, let’s be fair: reCAPTCHA and other traditional CAPTCHAs have been effectively safeguarding online content and revenue for years. But public opinion is clear: they’re widely disliked.

    Accessibility and user experience issues with solving CAPTCHAs

    CAPTCHAs aren’t just annoying and disruptive to the user journey; they also pose serious accessibility challenges. For individuals with dyslexia, visual impairments, or sensory disabilities, text- and image-based puzzles can be downright impossible to solve.

    A challenge of CAPTCHAs for those with visual challenges. (Source – X).

    The version of reCAPTCHA that simply asks users to confirm they’re not robots was somewhat better for user experience. However, those using screen readers still struggled with it, and detection failures often led to a secondary verification step involving image recognition, which compounded the issue.

    A lot of websites still operate with reCAPTCHA v2. Under this system, if a user’s behavior appears suspicious, they’re presented with a challenge to prove their humanity. This could be as simple as ticking a box stating, “I’m not a robot,” or it might involve a more complex image or audio recognition task. The extent of the challenge depends on Google’s confidence in the user’s humanity.

    ReCAPTCHA v2’s effectiveness is based mainly on Google’s “advanced risk analysis system,” which relies heavily on Google cookies. Chrome users or those logged into Google accounts usually face a simple checkbox. Conversely, Firefox users with disabled third-party cookies often encounter challenging image recognition tasks.

    Not everyone is a Chrome user or comfortable with Google services, often due to privacy concerns. As a result, privacy-minded individuals using browsers like Firefox or Brave, or even VPNs, face more stringent challenges from reCAPTCHA v2. This not only degrades their experience but also affects website conversion rates.

    Cybercriminals outsmarting CAPTCHAs

    What’s more, the widespread use of reCAPTCHA v2 has led cybercriminals to develop sophisticated methods to bypass even its most complex challenges. Some bots now use advanced AI, trained with neural networks, to solve reCAPTCHAs automatically.

    Ironically, while Google uses reCAPTCHA to improve its AI models for image and audio recognition, cybercriminals are exploiting these AI advancements to defeat reCAPTCHA. It’s a digital life cycle!

    Criminals also employ CAPTCHA farms in low-cost countries, where human workers solve reCAPTCHA challenges for bots. This approach enables bots that don’t require JavaScript execution. To bypass reCAPTCHA v2, these bots simply need to submit a response token from the CAPTCHA farm.

    This method allows attackers to use simpler HTTP request libraries rather than complex automated browsers, reducing operational costs and enabling faster page crawling or more efficient credential stuffing.

    In response to user grievances, Google introduced reCAPTCHA v3 to enhance the user experience. This version is invisible to site visitors and doesn’t require solving challenges. Instead, it continually assesses visitors’ behavior to determine their human or bot status.

    Currently, reCAPTCHA v3 is active on over 1.2 million websites, compared to the 10 million-plus sites using v2. It assigns a score between 0 and 1 to each user request, gauging the likelihood of it coming from a bot or a human. Users logged into Google accounts or using Chrome generally score higher. Website admins can refine these scores by specifying user actions that align with typical behavior in various contexts.

    Unlike reCAPTCHA v2, where user response verification was sufficient, v3 requires admins to decide based on user scores. This complexity poses a significant challenge, even for seasoned webmasters. Although reCAPTCHA v3 improves user experience by eliminating interruptions, it raises privacy issues and adds administrative complexity.
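To make that admin-side burden concrete, here is a minimal Python sketch of how a site might consume a v3 verdict. The siteverify endpoint and the `success`/`score` response fields are part of Google's documented reCAPTCHA API; the 0.5 threshold below is an arbitrary assumption that, as the article notes, each webmaster must tune for their own traffic.

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def fetch_verdict(secret_key: str, token: str) -> dict:
    """POST the client's response token to Google's siteverify endpoint
    and return the parsed JSON verdict (for v3 it includes a 'score')."""
    body = urllib.parse.urlencode(
        {"secret": secret_key, "response": token}
    ).encode()
    with urllib.request.urlopen(SITEVERIFY_URL, data=body) as resp:
        return json.load(resp)

def is_likely_human(verdict: dict, threshold: float = 0.5) -> bool:
    """v3 leaves the pass/fail call to the site operator: the token must
    have verified successfully AND the score must clear a site-chosen bar."""
    return bool(verdict.get("success")) and verdict.get("score", 0.0) >= threshold

# The decision logic, exercised on canned verdicts (no network needed):
print(is_likely_human({"success": True, "score": 0.9}))   # True
print(is_likely_human({"success": True, "score": 0.2}))   # False
```

Unlike v2, where Google's "pass" was the end of the story, every site using v3 has to pick and maintain that threshold per user action, which is exactly the added complexity described above.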

    Future of CAPTCHAs: auto-verify features in browsers

    Whether it’s v2 or v3, people seem to have a universal aversion to solving picture puzzles. Mashable SEA reported that in May, a user @Leopeva64 noticed Google Chrome testing an “auto-verify” feature on the desktop. This feature lets sites recognize users who have previously solved a CAPTCHA, letting them proceed without facing another puzzle.

    Can we expect auto-verify to come anytime soon? (Source – X).

    Recently, @Leopeva64 discovered that Microsoft Edge is testing a similar feature in its Android app, allowing websites to recognize users through previously solved CAPTCHAs, confirming their human status without requiring additional puzzle solutions.

    The exact timeline for these auto-verify features to be integrated into the public releases of Chrome and Edge remains uncertain. However, what is evident is the growing momentum towards eliminating the constant need for CAPTCHA verifications. This shift reflects a growing preference for smoother, less intrusive user experiences on the web while maintaining security standards.

    As the digital landscape evolves, integrating such features in mainstream browsers like Chrome and Edge signals a potential end to the era of repetitive CAPTCHA challenges. This change could mark a significant step forward in making web browsing more seamless and user-friendly, especially for those who have been burdened by the often cumbersome task of proving their humanity to machines.

    Uncovering the biggest data collectors: which companies are at the top? https://techwireasia.com/12/2023/which-companies-collect-the-most-data-from-users/ Tue, 26 Dec 2023 01:55:39 +0000 https://techwireasia.com/?p=236640 Companies, including Facebook and Instagram, heavily use and track user data. Surfshark uncovers extensive data tracking by shopping and food delivery apps, with Wish and DoorDash as notable examples. Surfshark’s analysis shows major apps like Facebook and Wish extensively link and track user data, raising privacy alarms. Data has become the new currency – but... Read more »

    The post Uncovering the biggest data collectors: which companies are at the top? appeared first on Tech Wire Asia.

  • Companies, including Facebook and Instagram, heavily use and track user data.
  • Surfshark uncovers extensive data tracking by shopping and food delivery apps, with Wish and DoorDash as notable examples.
  • Surfshark’s analysis shows major apps like Facebook and Wish extensively link and track user data, raising privacy alarms.
Data has become the new currency – but we know this already, and major companies worldwide are capitalizing on this trend. From social media giants to e-commerce platforms, businesses are collecting, analyzing, and using customer data in ways that are often invisible to the average user. This data collection serves various purposes, from improving user experience and personalizing services to more contentious practices like targeted advertising and selling information to third parties. While data-driven strategies can enhance business operations and customer satisfaction, they also raise significant privacy concerns.

    Surfshark’s eye-opening data privacy study

    This issue was brought to light in a recent study conducted by Surfshark’s Research Hub, which investigated the data privacy practices of 100 popular apps. The study reveals a worrying pattern in handling customer data, especially in shopping and food delivery services. Notably, apps such as Amazon Shopping and Wish are identified as major culprits in extensive user data collection. The findings indicate that over one-third of the data collected by shopping and food delivery apps is tracked and potentially shared with third-party advertising networks or data brokers. This kind of data handling poses a substantial threat to user privacy.

    Surfshark’s research is further enhanced by a free app privacy checker tool. This tool lets users take control of their digital footprint by allowing them to select specific apps on their phones and receive detailed reports on the extent of data collection. Such tools are essential in an era where data privacy concerns are increasingly coming to the forefront of public consciousness, urging consumers and companies to rethink the balance between data utility and privacy.

    Agneska Sablovskaja, lead researcher at Surfshark, noted that after analyzing 100 popular apps on the App Store, the company found a worrying trend: nearly 20% of the collected data was used for tracking.

    “Such tracked data can be shared with third-party advertisers or data brokers, who use it to deliver personalized ads targeting the users, or help companies with their market research,” said Sablovskaja. “Understanding an app’s privacy policy is crucial for safeguarding digital autonomy.”

    The study by Surfshark reveals a significant trend in data collection among shopping and food delivery apps. These apps gather 21 out of 32 possible data points, higher than the average of 15 across all 100 apps examined. Notably, these apps are more likely to link collected data to the user’s identity, with 95% of the data points associated directly with users. Furthermore, a considerable portion of this data, approximately one-third, is used for tracking purposes.

    Data points your apps - and the companies you use - have collected. (Source – Surfshark).

    Wish is the most data-intensive app in the shopping and food delivery category. It collects an impressive 24 of the 32 data points, linking almost all of them to the user’s identity. Over a third of this data is used to track users, a figure significantly higher than the average. This includes email addresses, precise locations, and purchase histories. Similarly, DoorDash is notable for using around 40% of its collected data points for tracking purposes.

    In contrast, Amazon is unique among the analyzed shopping and food delivery apps because it does not use the data it collects to track users. But it still gathers a substantial amount of user data, collecting 25 of the 32 possible data points, all linked to the user’s identity.

    Focusing specifically on food delivery apps, Uber Eats is prominent for tracking the most data points (12 out of 21 collected), including sensitive information like phone numbers, physical addresses, and search histories. GrubHub and Instacart also track many data points, with 11 and 10, respectively.

    The ten shopping and food delivery apps analyzed in the study include Amazon Shopping, eBay Marketplace, AliExpress, Etsy, Wish, DoorDash, Uber Eats, Grubhub, Deliveroo, and Instacart. These findings highlight this sector’s extensive data collection practices, emphasizing the need for increased awareness and possibly regulation regarding user data privacy.

    The top 10 least privacy-sensitive apps

    Surfshark’s recent investigation revealed several apps with notably poor privacy practices. To evaluate these apps, the company developed a unique system to assess each app’s data collection and usage level.

    The evaluation system ranked apps based on the collected data points, with the maximum possible score of 32. In cases where apps collected equal data points, they were further ranked based on how much they used this data to track users.
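The ranking described above amounts to a two-key sort, which can be sketched in a few lines of Python. The figures in the sample are illustrative only (Facebook's 32 collected / 7 tracked points are reported later in the study; the tie between the other two apps is contrived here to show the tiebreaker at work).

```python
# Sketch of the study's ranking: apps are ordered by how many of the
# 32 possible App Store data points they collect; ties are broken by
# how many of the collected points are used to track users.
def rank_apps(apps):
    """apps: iterable of (name, points_collected, points_tracked) tuples."""
    return sorted(apps, key=lambda app: (app[1], app[2]), reverse=True)

# Illustrative figures, not the study's full dataset.
sample = [
    ("TikTok", 24, 3),
    ("Facebook", 32, 7),
    ("Wish", 24, 10),
]
print([name for name, _, _ in rank_apps(sample)])
# ['Facebook', 'Wish', 'TikTok']
```

Here Wish and TikTok collect the same number of data points, so the app that tracks more of them ranks as less privacy-friendly.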

    Facebook and Instagram emerged as the most privacy-invasive apps in this study. Both apps, being Meta Platforms, Inc. products, collected all 32 data points defined by Apple, a distinction they alone held. Furthermore, all collected data is linked to the user, with 7 out of these 32 data points being employed for user tracking. This includes sensitive information like names, email addresses, phone numbers, and physical addresses.

    Additionally, three other apps – Wish, DoorDash, and TikTok – were noted for using data points linked to users for tracking purposes. Each app collects 24 data points, significantly higher than the average of 15 found across all apps analyzed in the study.

    For Wish and DoorDash, around 40% of the collected data points, including email address, precise location, and purchase history, are used for tracking. Similarly, TikTok uses three of its collected data points – the user’s email address, phone number, and device ID – for tracking purposes. This data collection and tracking level raises serious privacy concerns, highlighting the need for greater transparency and user control in data handling by these apps.

    How companies handle the collected data

    Surfshark’s analysis encompassed 100 apps, spanning ten different categories. These apps were selected based on their prominence in search engine results for queries like “the most popular [app category] apps.” The App Store identifies 32 unique data points under 12 distinct categories. Surfshark’s study focused on three main aspects of data collection: the types of unique data points gathered, the amount of data linked to users, and the data utilized for tracking purposes.

    Collected data used for tracking. (Source – Surfshark).

    The study explores the extent to which collected data is associated with individual users and how it’s used for tracking across various services. Apple outlines three potential treatments for collected data:

    • Use for tracking: this involves associating data from the app related to a specific user or device (like user IDs or device IDs) with data from external sources, such as third-party advertising networks. The primary use of tracking is in targeted advertising and measuring the effectiveness of these ads.
    • Linking to users: in this context, linking refers to associating collected data with the user’s identity. Apple notes that data from an app is usually linked to the user’s identity, unless specific measures are taken beforehand to anonymize or de-identify it. This notion aligns with the typical definition of “personal information” or “personal data” in relevant privacy legislation, which inherently links to the user’s identity.
    • Non-linked data: this data is not associated with the user’s identity.
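The three treatments above form a simple hierarchy per data point, which can be modeled in a few lines of Python. The precedence used here (a tracked point is reported as "used for tracking" even though it is typically also linked) is an assumption made for this sketch, not Apple's specification.

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    """One collected App Store data point and how the app treats it."""
    name: str
    linked_to_user: bool = False
    used_for_tracking: bool = False

def treatment(point: DataPoint) -> str:
    """Classify a data point into one of Apple's three buckets."""
    if point.used_for_tracking:
        return "used for tracking"
    if point.linked_to_user:
        return "linked to user"
    return "not linked"

print(treatment(DataPoint("email address", linked_to_user=True, used_for_tracking=True)))
# used for tracking
print(treatment(DataPoint("crash data")))
# not linked
```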

    Tracking often includes sharing data with data brokers: entities that compile detailed user profiles based on demographics, behavioral patterns, and interests, then sell this information to various organizations. The applications of this data range from advertising and market research to financial risk assessment and beyond.

    Surfshark’s study on data privacy practices in popular apps has revealed significant concerns regarding user data collection and tracking. Apps like Wish and DoorDash, along with social media giants such as Facebook and Instagram, exemplify this invasive practice by collecting extensive data points, a large portion of which is used for tracking users. This situation underscores the urgent need for heightened awareness and stricter regulations around data privacy to protect users from potential misuse of their personal information.

    Adobe spotlights generative AI’s role in Malaysian customer experience https://techwireasia.com/11/2023/what-does-adobe-generative-ai-data-show-about-malaysian-firms/ Mon, 06 Nov 2023 00:15:15 +0000 https://techwireasia.com/?p=235052 The latest Adobe generative AI study indicates Malaysian firms need to catch up in using the technology to boost customer experience. Generative AI is crucial for customer satisfaction and competitive edge in Malaysia. Adobe generative AI survey highlights a disconnect between companies and customers’ requirements. Malaysian corporations emphasize the importance of enhancing customer experiences as... Read more »

    The post Adobe spotlights generative AI’s role in Malaysian customer experience appeared first on Tech Wire Asia.

  • The latest Adobe generative AI study indicates Malaysian firms need to catch up in using the technology to boost customer experience.
  • Generative AI is crucial for customer satisfaction and competitive edge in Malaysia.
  • Adobe generative AI survey highlights a disconnect between companies and customers’ requirements.
Malaysian corporations emphasize the importance of enhancing customer experiences as a primary engine for growth, albeit confronting the obstacle of diminished budgets, a study from Adobe has highlighted. In an effort to economize, these companies have reduced their financial plans for marketing and enhancing the customer experience, with 41% having already made cuts and an additional 41% planning reductions in the coming year.

    To counterbalance the constraints of tighter budgets, Malaysian enterprises are employing technological solutions aimed at streamlining workflows (as reported by 63% of businesses) and integrating generative AI into their operations (as indicated by 49% of respondents). This is in comparison with a broader outlook in Southeast Asia, including countries like Malaysia, Singapore, and Thailand, where 64% are focusing on workflow efficiency technologies and 56% on adopting generative AI.

    The inference drawn from the research is that Malaysian companies are trailing in formalizing generative AI strategies, a trend that does not match the pace of consumer expectations and employee adoption.

    Falling behind or playing a different game?

    Malaysia’s apparent absence from the AI Safety Summit aligns with these findings. It has come to light that a consortium of 29 nations and blocs, including the US, the UK, China, Australia, Brazil, India, and the European Union, has pledged collective efforts to avert the potentially grave repercussions, whether intentional or not, stemming from advanced AI systems.

    These countries have reached a consensus, acknowledging that while AI has the capacity to significantly better human lives, global peace, and prosperity, it simultaneously brings substantial risks, especially within the everyday sectors of life.

    The group of nations concurs that frontier AI applications possess the potential for misuse, particularly in sensitive areas such as cybersecurity and biotechnology. Even as individual nations may pursue their own regulatory measures, there’s a shared imperative to establish a common language for “classifications and categorizations of risk” associated with AI.

    Elon Musk backed British PM Rishi Sunak’s decision to invite China to the AI Safety Summit 2023. (Source – X)

    A comprehensive strategy to address these AI-related risks will hinge on pinpointing mutual safety concerns and cultivating a collective, scientifically sound, and evidence-based grasp of these potential threats. Given Malaysia’s absence from the summit, conjecture suggests that the country may be channeling its resources into other areas, possibly preferring regional collaborations or focusing on national advancements in AI safety and ethics. Or it might be adopting a diplomatic stance that diverges from the summit’s goals and required commitments. Yet, policy or regulatory differences concerning AI might not resonate with the consensus or preferred methodologies at the summit.

    For the moment, these remain speculative assessments, in the absence of concrete explanations for Malaysia’s non-involvement in the summit.

    Adobe reveals: Malaysian consumer appetite for generative AI technology

    The Malaysian consumer’s readiness to embrace AI technology is noteworthy. The report underscores Malaysian enthusiasm for the ways in which generative AI can elevate products and services (with 47% in favor) and customer experiences (also at 47%). In Malaysia, 37% of consumers perceive the adoption of generative AI as a critical factor for businesses to sustain competitiveness, a sentiment slightly stronger than the overall 35% seen across Southeast Asia.

    Within the corporate realm, 95% of Malaysian employees have utilized generative AI for marketing and customer engagement initiatives. This contrasts with a mere 38% of Malaysian professionals who report that their employers actively use generative AI tools. These employees are applying AI tools like text-to-image generators for crafting promotional materials and campaign concepts (54%), as well as harnessing conversational AI for copy generation (38%) and research and insights (over half at 55%).

    This trend of employee generative AI usage is consistent across Southeast Asia, where 95% report using these tools in marketing efforts, although only 42% indicate their companies have adopted such technologies.

    Ethical AI use in the face of rapid technological evolution

    Simon Dale, Adobe’s vice president & managing director for Southeast Asia & Korea, emphasizes the workforce’s widespread use of generative AI, highlighting an imperative for organizations to establish guidelines and ethical standards for AI utilization proactively.

    Dale cautions, “As generative AI technologies continue to evolve, an absence of a set of strong guardrails and AI ethics principles can pose risks to the organization and even erode consumer trust.”

    Malaysian brands, within the scope of an economically challenging climate, appear to underestimate elements crucial for fostering consumer trust and financial patronage, specifically data security, ecological sustainability, and the inclusivity of their offerings.

    The Adobe generative AI study indicates that in Malaysia, nearly half of consumers place their loyalty and increased spending with brands they trust. The paramount factor for winning consumer trust lies in protecting data privacy and respectful data use. This is closely followed by offering products and services that are both beneficial for customers and the environment and are ethically produced. These trust-building factors also have the potential to amplify customer spending with a brand.

    Consumers place their loyalty and increased spending with brands they trust. (Source – Shutterstock)

    Conversely, 86% of Malaysian consumers would curtail their spending with brands that fail to safeguard their data and honor their privacy, with 43% willing to withdraw their spending completely. An experience that lacks accessibility for individuals with disabilities will cause 85% to reduce their spending, and 88% will decrease their financial support if a brand fails to meet sustainability standards and regulations.

    Despite the clear influence on consumer spending habits, a surprising 45% of Malaysian brands do not regard data security as critical for attracting and maintaining a customer base. This perspective is even more pronounced regarding the importance of accessible and sustainable products and services, with 49% and 53% of brands, respectively, overlooking these factors.

    Simon Dale from Adobe points out that consumers are increasingly aware of data privacy issues and the need for brands to deliver on promises of sustainability and accessibility. “To maintain brand trust amid market shifts and evolving consumer behavior, brands must demonstrate responsible practices and social accountability,” Dale advises. This entails not only utilizing emerging digital technologies to bolster engagement, but also ensuring the safety of consumer data.
