Arkadian Cybersecurity

The New Wave of Cyber Threats: Typosquatting and Deepfake Attacks Targeting Your Data

Introduction

The cybersecurity landscape is evolving at an alarming rate, with two particularly insidious threats gaining prominence: typosquatting and deepfake-enabled attacks. These sophisticated social engineering techniques are no longer theoretical concerns—they’re actively draining billions from organizations and individuals worldwide.

According to Zscaler ThreatLabz’s 2024 analysis, researchers examined over 30,000 lookalike domains between February and July 2024, discovering that more than 10,000 were malicious[1]. Meanwhile, deepfake fraud attempts surged by 3,000% in 2023 alone, with financial losses from deepfake-enabled fraud exceeding $200 million in just the first quarter of 2025[2].

This article examines both threats through real-world attack examples, explores the techniques criminals use, and provides evidence-based defense strategies recommended by cybersecurity authorities including NIST and CISA.

Understanding Typosquatting

What Is Typosquatting?

Typosquatting, also known as URL hijacking or domain spoofing, is a form of cybersquatting that exploits the simple human error of mistyping a website address[3]. The attack capitalizes on common typos, misspellings, or visual confusion when users enter URLs directly into their browser’s address bar.

Cornell University defines typosquatting as “the process of acquiring misspellings of a domain name in the hopes of catching and exploiting traffic intended for another website”[4].

How Typosquatting Works

The attack follows a straightforward but effective pattern:

1. **Domain Registration**: Attackers register misspelled domain names for just a few dollars through domain registrars. The low cost of domain registration makes typosquatting incredibly profitable[5].

2. **DNS Resolution**: When users type a URL, their browser queries the Domain Name System (DNS) for the corresponding IP address. If the mistyped domain has been registered by an attacker, DNS resolves it like any other domain and directs the user to the attacker’s server[5].

3. **Malicious Hosting**: The fraudulent website is hosted on attacker-controlled servers, often using free hosting services, virtual private servers (VPS), or content delivery networks (CDNs) to make them difficult to trace and shut down[5].

Six Main Typosquatting Techniques

According to the Multi-State Information Sharing and Analysis Center (MS-ISAC), there are six primary typosquatting variations[6]:

**1. Misspelling**: Common typing errors like “mcirosoft.com” instead of “microsoft.com”

**2. Substitution**: Replacing letters with visually similar characters, such as replacing “0” (zero) with “O” (letter O)

**3. Omission**: Leaving out a letter, like “gogle.com” instead of “google.com”

**4. Insertion**: Adding an extra letter, such as “amazzon.com”

**5. Hyphenation**: Adding or removing hyphens, like “face-book.com”

**6. Homograph Attack**: Using characters from different alphabets that look identical. For example, using Cyrillic “а” instead of Latin “a”—visually indistinguishable but technically different characters[6].
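The first five categories above are mechanical enough to generate programmatically, which is essentially what brand-monitoring tools do. Below is a minimal, illustrative sketch in Python of a variant generator for a single-label domain like `google.com` (homograph variants are omitted, since generating those properly requires Unicode confusables tables; multi-part TLDs like `.co.uk` are also not handled):

```python
def typo_variants(domain: str) -> set:
    """Generate candidate typosquats of a simple `name.tld` domain,
    covering misspelling (transposition), substitution, omission,
    insertion, and hyphenation from the MS-ISAC list."""
    name, _, tld = domain.partition(".")
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    variants = set()

    for i in range(len(name)):
        # Omission: drop one character ("gogle")
        variants.add(name[:i] + name[i + 1:])
        # Substitution: swap in every other letter
        for c in alphabet:
            if c != name[i]:
                variants.add(name[:i] + c + name[i + 1:])
        # Misspelling via adjacent transposition ("mcirosoft")
        if i < len(name) - 1:
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])

    for i in range(len(name) + 1):
        # Insertion: add one extra character ("amazzon")
        for c in alphabet:
            variants.add(name[:i] + c + name[i:])
        # Hyphenation: insert a hyphen between characters ("face-book")
        if 0 < i < len(name):
            variants.add(name[:i] + "-" + name[i:])

    variants.discard(name)  # never flag the legitimate name itself
    return {v + "." + tld for v in variants}

print(len(typo_variants("google.com")))  # hundreds of candidates from one name
```

Even this toy generator produces hundreds of candidates for a six-letter name, which helps explain why defenders cannot simply register every variant.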

Real-World Typosquatting Examples

**Goggle.com**: One of the most notorious cases involved “goggle.com,” a typosquatted version of Google. In 2006, McAfee highlighted this domain as it installed significant amounts of malware through drive-by downloads, including a rogue anti-spyware program called SpySheriff[7]. By 2018, the site redirected users to adware pages, and attempts to access it through private DNS resolvers resulted in the page being blocked as malware[7].

**Jacquemus Fashion Brand**: In 2020, French fashion designer Simon Porte Jacquemus successfully sued the owner of “jacqumus.com” (note the missing “e”). The typosquatting site was created to exploit the brand name and infect users’ devices with malware[8].

**Twitter Scam**: In 2013, the domain “twiter.com” redirected users to a survey scam that tricked thousands of visitors into providing personal information before being removed[9].

The Scale of the Problem

Tim Helming, security evangelist at DomainTools, reports that his company observes hundreds of squatting domain attempts every day. “In the last 24 hours I observed 11 domains spoofing iCloud, and several of them included the term ‘support,’ which strongly hints at credential harvesting,” he explains. “Multiply this by the hundreds or thousands of well-known company names out there and you can see how extensive this activity is”[10].

Recent data from Zscaler shows the top targeted brands:
– Google: 28.8% of typosquatting attempts
– Microsoft: 23.6%
– Amazon: 22.3%
– Meta: 4%[1]

Criminal Motivations Behind Typosquatting

According to Splunk’s cybersecurity research, over 18% of registered squatting domains are malicious and used to distribute malware or conduct phishing attacks[9]. Criminals use typosquatted domains for:

– **Phishing and credential theft**: Fake login pages capture usernames, passwords, and financial information
– **Malware distribution**: Drive-by downloads that install malicious software
– **Ad fraud**: Monetizing misdirected traffic through advertising revenue
– **Brand impersonation**: Damaging competitor reputations or selling counterfeit goods
– **Email harvesting**: Collecting misaddressed emails sent to typo domains

The Rising Threat of Deepfake Attacks

What Are Deepfakes?

Deepfakes are synthetic media created using artificial intelligence—particularly generative AI and deep learning—to manipulate or fabricate video, audio, images, or text content. The technology can convincingly replicate a person’s appearance, voice, and mannerisms, making fraudulent content nearly impossible to detect with the naked eye.

The Explosive Growth of Deepfake Threats

The statistics are sobering:

– Deepfake files surged from 500,000 in 2023 to a projected 8 million in 2025[2]
– Fraud attempts increased 3,000% in 2023[2]
– Voice deepfakes rose 680% in a single year[2]
– Nearly 83% of phishing emails are now AI-generated[11]
– 49% of organizations faced losses tied to deepfake incidents in 2024, up from 37% in 2023 and 29% in 2022[12]
– Average damages from deepfake attacks exceeded $450,000[12]
– Global financial losses from phishing hit $17.4 billion in 2024, representing a 45% year-over-year increase[11]

According to Verizon’s 2023 Data Breach Investigations Report, 74% of data breaches involve human elements, including social engineering attacks—a category where deepfakes excel[13].

Real-World Deepfake Attack Examples

**The $25 Million Hong Kong Heist (February 2024)**

In one of the most sophisticated deepfake attacks to date, a finance worker at British engineering firm Arup transferred $25.6 million to accounts controlled by fraudsters after joining a video conference call. The employee believed they were speaking with the company’s CFO and several colleagues—but every participant on that call was a deepfake[12][14].

This wasn’t a simple voice manipulation. The attackers had created convincing video deepfakes of multiple senior executives, complete with realistic facial movements and mannerisms, all orchestrated in real-time during a video call.

**Ferrari’s Close Call (July 2024)**

Ferrari narrowly avoided a major loss when scammers used deepfake voice technology to impersonate CEO Benedetto Vigna. The attackers replicated Vigna’s distinctive southern Italian accent and attempted to pressure finance executives into making a large transfer.

The fraud was uncovered when an alert employee asked the caller to reference a book that Mr. Vigna had recently recommended—something the AI could not answer. Ferrari has since introduced knowledge-based authentication for all high-value transactions[15].

**YouTube CEO Impersonation (2025)**

A phishing campaign used a deepfake video of YouTube CEO Neal Mohan embedded in a fake “YouTube Creators” portal to trick content creators into entering their login credentials. The video appeared genuine and urged immediate verification to avoid account suspension[15].

**Italian Executive Scam (Early 2025)**

Several prominent Italian executives were duped by deepfake impersonations of political figures, including Defence Minister Guido Crosetto. The scammers claimed to be raising urgent funds to rescue Italian journalists abroad, successfully extracting at least €1 million from one company[15].

**Cryptocurrency Livestream Fraud (2024)**

Cybercriminals hijacked popular YouTube channels and streamed deepfake videos of Elon Musk and Michael Saylor promising to “double” any cryptocurrency sent to their wallet addresses. Victims collectively lost more than $600,000[15].

Multi-Channel Deepfake Attacks

Modern deepfake attacks rarely rely on a single channel. According to Adaptive Security, attackers now orchestrate sophisticated multi-channel campaigns:

1. Initial contact via email or text message
2. Follow-up with voice notes featuring cloned voices
3. Escalation to video calls with deepfake video
4. Creating artificial urgency and authority[12]

In 2024 alone, at least five FTSE 100 companies, including WPP and Octopus Energy, reported that their CEOs had been impersonated in deepfake scams[12].

The Technology Behind Deepfakes

According to IBM research, creating a deepfake is remarkably affordable. The average cost of creating a deepfake is just $1.33, yet the expected global cost of deepfake fraud in 2024 reached $1 trillion[16].

Google Trends data shows that searches for “free voice cloning software” rose 120% between July 2023 and July 2024. Users don’t need extensive technical skills—three seconds of audio is sometimes all that’s needed to produce an 85% voice match from the original to a clone[17].

DeepFaceLab claims that more than 95% of deepfake videos are created with its open-source software[17].

Why Deepfakes Are So Effective

Research shows that false news and rumors spread faster than truthful news—explaining how deepfakes can be so effective. In one study, the top 1% of rumors on Twitter reached between 1,000 to 100,000 people, while truthful news rarely reached more than 1,000 people[17].

Human detection rates are dismal: people can identify high-quality deepfake videos correctly only 24.5% of the time[18]. Even trained professionals struggle with sophisticated deepfakes.

Deepfake Vishing: The Voice Attack Vector

According to Right-Hand AI’s 2025 State of Deep Fake Vishing report, deepfake vishing (voice phishing) has surged by over 1,600% in the first quarter of 2025 compared to the end of 2024[19].

Attackers now use platforms like Xanthorox AI, which automate both voice cloning and live call delivery. These tools integrate seamlessly with enterprise VoIP and collaboration platforms like Microsoft Teams, Zoom, and traditional phone systems, allowing attackers to impersonate colleagues and blend in with real workflows[19].

Organized Crime and Deepfakes

Several organized cybercrime groups have emerged as major deepfake threat actors:

**The Com**: A sprawling syndicate spanning Australia, North America, and Southeast Asia, executing complex multi-channel campaigns combining voice impersonation with smishing and phishing. In April 2025, they successfully breached several Australian banks by spoofing vendor payment approvals[19].

**Lazarus Group**: Known for state-sponsored espionage, this group has turned deepfake vishing into a tool for strategic data theft. In South Korea, attackers posed as energy executives to extract proprietary project files from national infrastructure firms[19].

**SilverPhantom**: A Latin American collective that emerged in 2024 targeting romance scams but shifted in 2025 to corporate procurement fraud. They have repeatedly targeted procurement teams in Brazil and Argentina, using synthetic voices to reroute supplier payments[19].

Defense Strategies: Protecting Against Typosquatting

Organizational Defenses

**1. Defensive Domain Registration**

The most effective prevention strategy is to register domain variations before attackers can. According to UpGuard’s cybersecurity recommendations:

– Register common misspellings and typographical errors of your primary domain
– Secure phonetic approximations and alternate spellings
– Register variants with and without hyphens
– Acquire different country extensions and relevant top-level domains (TLDs)
– Redirect all registered variants to your official website[20]

**2. Utilize the Trademark Clearinghouse (TMCH)**

Register your brand name with the TMCH, a central database for verified trademarks established by ICANN. This provides priority access to register domains during new generic top-level domain (gTLD) launches[20].

**3. Continuous Monitoring**

According to Huntress’s cybersecurity guidance, organizations should:

– Use domain monitoring services like DNSTwist to scan for newly registered domains similar to your brand
– Monitor Certificate Transparency Logs to identify rogue SSL certificates
– Set up alerts for suspicious domain registrations
– Watch for anomalies in DNS records or spikes in failed login attempts[21]
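One common building block behind domain-monitoring services such as DNSTwist is edit distance: a newly registered domain within one or two edits of your brand is worth an alert. A minimal, self-contained sketch (the domain names are placeholders, and real tooling would also normalize TLDs and check homographs):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def flag_lookalikes(new_domains, protected, max_distance=2):
    """Flag newly observed registrations within a small edit distance
    of a protected brand domain (exact matches are not flagged)."""
    hits = []
    for d in new_domains:
        for p in protected:
            dist = levenshtein(d, p)
            if 0 < dist <= max_distance:
                hits.append((d, p, dist))
    return hits

alerts = flag_lookalikes(
    ["examp1e.com", "exampel.com", "unrelated.org"],  # hypothetical feed
    ["example.com"],
)
print(alerts)  # the two near-misses are flagged; the unrelated domain is not
```

In practice the “new domains” feed would come from Certificate Transparency logs or newly-registered-domain lists, as the Huntress guidance above suggests.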

**4. Email Authentication Protocols**

Implement SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting, and Conformance) to authenticate your email domains and prevent spoofing[21].
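For concreteness, these three protocols are all published as DNS TXT records. The fragment below shows the general shape for a hypothetical `example.com` zone; the mailer hostname, selector name, and reporting address are placeholders, not recommendations:

```text
; SPF: which hosts may send mail claiming to be example.com
example.com.        IN TXT "v=spf1 include:_spf.example-mailer.com -all"

; DMARC: what receivers should do with mail that fails SPF/DKIM alignment
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

; DKIM: public key published by the mail provider under a selector
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
```

A `p=reject` DMARC policy is the strictest setting; many organizations start at `p=none` (monitor only) and tighten once reports confirm legitimate mail passes.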

**5. Legal Action**

Under the U.S. Anticybersquatting Consumer Protection Act (ACPA) of 1999, trademark holders can take legal action against typosquatters. The ACPA prohibits the bad-faith registration of domain names that are identical or confusingly similar to existing trademarks[22].

Additionally, through ICANN’s Uniform Domain-Name Dispute-Resolution Policy (UDRP), trademark holders can file cases at the World Intellectual Property Organization (WIPO) against typosquatters[7].

Individual User Protections

According to MS-ISAC Security guidelines, users should:

**1. Verify Links Before Clicking**: Hover over links to check the actual destination URL before clicking[6]

**2. Use Bookmarks**: Navigate to important sites through saved bookmarks rather than typing URLs directly[6]

**3. Double-Check URLs**: Carefully inspect URLs for typos, especially when entering sensitive information[6]

**4. Use Search Engines**: When unsure of a URL, use a search engine to find the legitimate website rather than typing directly[6]

**5. Enable Browser Protections**: Microsoft Edge includes a typosquatting checker that warns users if they may have mistyped a common web address and could be directed to a malicious site[23]

**6. Use WHOIS Lookups**: Verify the legitimacy of domains using WHOIS lookups and web reputation tools[6]
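The homograph variant described earlier is the hardest to spot by eye, but it is easy to detect mechanically: a legitimate brand label almost never mixes alphabets. A small sketch of a mixed-script check (a coarse heuristic based on Unicode character names, not a full implementation of Unicode's confusables rules):

```python
import unicodedata

def script_of(ch: str) -> str:
    """Coarse script label derived from the Unicode character name."""
    name = unicodedata.name(ch, "")
    for script in ("LATIN", "CYRILLIC", "GREEK"):
        if name.startswith(script):
            return script
    return "OTHER"

def looks_homograph(label: str) -> bool:
    """Flag a domain label that mixes letters from more than one script,
    e.g. a Cyrillic 'а' hidden among Latin characters."""
    scripts = {script_of(c) for c in label if c.isalpha()}
    scripts.discard("OTHER")
    return len(scripts) > 1

print(looks_homograph("apple"))        # pure Latin: not flagged
print(looks_homograph("\u0430pple"))   # U+0430 CYRILLIC SMALL LETTER A: flagged
```

Browsers apply similar mixed-script logic when deciding whether to display an internationalized domain name or fall back to its raw `xn--` punycode form.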

NIST and CISA Recommendations

In their April 2021 joint guidance “Defending Against Software Supply Chain Attacks,” NIST and CISA highlighted typosquatting as a significant threat to software supply chains. In 2018, researchers discovered twelve malicious Python libraries uploaded to the Python Package Index (PyPI) using typosquatting tactics to spoof popular libraries such as “diango,” “djago,” or “dajngo” for Django[24].

The agencies recommend:
– Implementing Secure Software Development Frameworks (SSDF)
– Following risk-based approaches to development activities
– Carefully verifying library and package names before installation
– Using automated tools to detect dependency confusion attacks[24]
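The "diango"/"dajngo" PyPI incident above lends itself to a simple automated check: before installing, compare each requested package name against a trusted list and flag near-misses. A minimal sketch (the trusted list here is a stand-in for whatever allowlist or lockfile an organization actually maintains):

```python
def suspicious_packages(requested, trusted, max_distance=2):
    """Flag requested package names that are near-misses of trusted names
    ('diango' vs 'django') while passing exact matches through."""
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    flagged = {}
    for name in requested:
        if name in trusted:
            continue  # exact match: fine
        for good in trusted:
            if edit_distance(name, good) <= max_distance:
                flagged[name] = good  # near-miss: likely typosquat
    return flagged

print(suspicious_packages(["diango", "requests"], ["django", "requests"]))
# → {'diango': 'django'}
```

Real dependency-confusion tooling adds more signals (upload date, download counts, maintainer reputation), but name distance alone catches the PyPI examples the agencies cited.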

Defense Strategies: Protecting Against Deepfakes

Organizational Defenses

**1. Multi-Factor Authentication (MFA) and Enhanced Verification**

NIST’s updated Digital Identity Guidelines (SP 800-63 Revision 4) now require phishing-resistant authentication methods and biometrics with liveness detection[25]. Organizations should:

– Implement challenge-response verification for high-value transactions
– Use knowledge-based authentication (KBA) with information only the real person would know
– Require multiple approval signatures for large financial transfers
– Establish out-of-band verification for unusual requests (e.g., calling the person back on a known number)
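The essence of out-of-band verification is that the contact details used to verify a request must never come from the request itself. A toy sketch of that rule, with a hypothetical internal directory:

```python
from typing import Optional

# Hypothetical pre-registered directory of verified callback numbers.
DIRECTORY = {"cfo@example.com": "+1-555-0100"}

def callback_number(request: dict) -> Optional[str]:
    """Return the directory number to call back for verification,
    deliberately ignoring any number the (possibly fraudulent)
    request supplies."""
    return DIRECTORY.get(request["claimed_sender"])

req = {
    "claimed_sender": "cfo@example.com",
    "callback_hint": "+1-555-9999",  # attacker-supplied; never used
}
print(callback_number(req))  # +1-555-0100, from the directory
```

The Arup case above illustrates why this matters: every contact channel the attackers controlled looked legitimate, so verification has to route through a channel they do not control.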

**2. Biometric Security with Liveness Detection**

According to NIST guidelines, biometric authentication must include liveness detection to verify that the biometric trait comes from a real, present person rather than a replayed recording or manipulated image[25].

Advanced liveness detection techniques include:
– 3D depth sensing using structured light or time-of-flight sensors
– Thermal imaging to detect real skin temperature
– Analysis of micro-expressions and gaze patterns
– Detection of natural physiological responses like pulse or breathing[26]

**3. Continuous Authentication**

Rather than one-time verification at login, continuous authentication monitors identity throughout active sessions through:
– Keystroke dynamics (typing rhythm and speed)
– Mouse movement patterns
– Gait recognition from smartphone sensors
– Behavioral biometrics[26]
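Keystroke dynamics, the first signal in that list, can be sketched with nothing more than a z-score: compare a session's typing rhythm against the user's enrolled baseline. The numbers below are invented for illustration; production systems model many more features (dwell time, digraph latencies) and use proper classifiers:

```python
from statistics import mean, stdev

def keystroke_anomaly(baseline_ms, session_ms):
    """Z-score of a session's mean inter-key interval against the user's
    enrolled baseline; large values suggest a different typist (or a bot)."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    return abs(mean(session_ms) - mu) / sigma

baseline  = [110, 120, 105, 130, 115, 125]  # enrolled intervals (ms)
same_user = [118, 122, 108, 127]
impostor  = [45, 50, 42, 48]                # much faster, scripted input

print(round(keystroke_anomaly(baseline, same_user), 2))  # small
print(round(keystroke_anomaly(baseline, impostor), 2))   # large
```

The point of continuous authentication is that such scores are computed throughout the session, so a hijacked or deepfaked session drifts away from the baseline even after a successful login.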

**4. AI-Powered Detection Tools**

NIST’s participation in deepfake detection evaluations has validated several detection approaches:

– **Voice authentication systems**: Companies like Pindrop have achieved exceptional performance in NIST evaluations, with detection models that can distinguish between real and AI-generated content with high accuracy[27]

– **Face morphing detection**: NIST’s FATE MORPH 4B guidelines show that single-image detection can achieve up to 100% accuracy at 1% false detection rates when trained on examples from the morphing software used[28]

– **Compression artifact analysis**: Tools designed to look for compression artifacts, as lossy compression in media creates detectable patterns[29]

**5. Digital Watermarking and Content Authentication**

According to NIST AI 100-4 guidance on “Reducing Risks Posed by Synthetic Content,” organizations should implement strategies to protect the authenticity and integrity of their own content through:

– Digital watermarking embedded in legitimate content
– Cryptographic hashing and fingerprinting using metadata
– Content provenance tracking systems
– Blockchain-based verification for critical communications[30]
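The hashing-and-fingerprinting item above can be illustrated with a keyed fingerprint: the organization signs each published asset, and recipients recompute the tag to detect tampering. A minimal sketch using the Python standard library (the key here is a placeholder; real deployments would keep it in a KMS or use full public-key signatures so verifiers don't hold the secret):

```python
import hashlib
import hmac

SIGNING_KEY = b"org-wide-secret"  # hypothetical; store in a KMS in practice

def fingerprint(content: bytes) -> str:
    """Keyed fingerprint of published content, stored alongside the
    asset or in a provenance log."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, expected: str) -> bool:
    """Constant-time comparison against the published fingerprint."""
    return hmac.compare_digest(fingerprint(content), expected)

original = b"official press statement v1"
tag = fingerprint(original)
print(verify(original, tag))                     # True
print(verify(b"tampered press statement", tag))  # False
```

This protects the integrity of *your* content; detecting synthetic content someone else fabricated still requires the passive detection techniques NIST describes.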

**6. Employee Training and Awareness**

The Joint Cybersecurity Information (CSI) sheet from NSA, FBI, and CISA emphasizes that technology alone is insufficient. Organizations must implement human-in-the-loop preparation[29].

According to Adaptive Security research, after approximately a dozen simulation rounds, employee detection success surged from 34% to 74%[12]. Effective training should include:

– Regular simulated deepfake attacks in safe environments
– Training across multiple channels (email, voice, video)
– Practice with realistic scenarios that employees will actually face
– Emphasis on verification protocols before taking action
– Updates as new attack techniques emerge

**7. Incident Response Plans**

Organizations should prepare for deepfake attacks just like any other cybersecurity incident:

– Establish clear escalation procedures for suspicious communications
– Create response teams trained in deepfake identification
– Develop communication protocols for confirming unusual requests
– Maintain relationships with law enforcement and cybersecurity firms
– Document and share lessons learned from attempted attacks

Policy and Procedural Controls

**1. Financial Transaction Policies**

Ferrari’s successful defense against a deepfake CEO attack demonstrates the value of robust procedures:
– Require knowledge-based authentication for high-value transactions
– Mandate call-backs on pre-verified numbers for unusual requests
– Implement dual approval requirements for large transfers
– Set transaction limits that trigger additional verification[15]
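Those four rules compose naturally into a single policy gate. A toy sketch (threshold and rule details are illustrative, not Ferrari's actual policy):

```python
def transfer_allowed(amount, approvals, verified_callback, limit=100_000):
    """Sketch of the rules above: transfers over the limit require two
    distinct approvers AND a completed call-back on a verified number."""
    if amount <= limit:
        return len(set(approvals)) >= 1      # routine: one approver suffices
    return len(set(approvals)) >= 2 and verified_callback

print(transfer_allowed(50_000, ["alice"], False))         # True: routine transfer
print(transfer_allowed(500_000, ["alice", "bob"], True))  # True: all checks pass
print(transfer_allowed(500_000, ["alice"], True))         # False: one approver
```

The deduplication via `set` matters: a deepfaked executive pressuring one employee into “approving twice” should not satisfy a dual-approval rule.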

**2. Communication Verification Protocols**

According to the Department of Defense’s “Contextualizing Deepfake Threats to Organizations” report, organizations should establish protocols such as:

– Pre-agreed code words or phrases for emergency situations
– Verification questions based on shared recent experiences
– Challenge-response systems for video/voice communications
– Documentation requirements for unusual executive requests[29]
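A pre-agreed code phrase is only useful if it can't be stolen from the systems that check it. One hedged sketch: store only a salted hash of the phrase and compare in constant time, so a compromised workstation never yields the phrase itself (the phrase, salt, and iteration count below are placeholders):

```python
import hashlib
import hmac

# Hypothetical pre-agreed phrase, stored only as a salted PBKDF2 hash.
SALT = b"rotate-me-quarterly"
STORED = hashlib.pbkdf2_hmac("sha256", b"tuscan sunrise", SALT, 100_000)

def challenge_passed(spoken_phrase: str) -> bool:
    """Constant-time check of a code phrase given during a voice/video call."""
    attempt = hashlib.pbkdf2_hmac("sha256", spoken_phrase.encode(), SALT, 100_000)
    return hmac.compare_digest(attempt, STORED)

print(challenge_passed("tuscan sunrise"))  # True
print(challenge_passed("golden sunset"))   # False
```

This is the same principle behind Ferrari's knowledge-based question: the deepfake can clone a voice, but it cannot answer a challenge whose answer was never spoken in any recording the attacker harvested.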

NIST Standards and Frameworks

NIST’s comprehensive approach to synthetic content threats includes:

**Detection Technologies**:
– Real-time verification capabilities
– Passive detection techniques that don’t require watermarks
– Active detection methods that analyze content for manipulation artifacts[30]

**Prevention Strategies**:
– Metadata protection and verification
– Content authentication at creation
– Secure channels for high-integrity communications[30]

**Testing and Validation**:
– The Open Media Forensics Challenge (OpenMFC) provides standardized evaluation datasets for testing detection systems
– Multiple task categories including manipulation detection, deepfake detection, and steganography detection[31]

Individual User Protections

For individuals, Security.org recommends:

**1. Verify Before Acting**: If something seems too good to be true (celebrity giveaways, urgent requests from family), it probably is[17]

**2. Use Multiple Communication Channels**: If you receive an unusual request via one channel, verify it through another independent channel[17]

**3. Be Skeptical of Urgency**: Deepfake scams often create artificial time pressure. Legitimate requests can wait for verification[17]

**4. Protect Your Digital Footprint**: Limit publicly available audio and video of yourself to make it harder for criminals to create convincing deepfakes[17]

**5. Report Suspicious Content**: Report suspected deepfakes to platform providers and, when appropriate, to law enforcement[17]

The Future of Typosquatting and Deepfake-Enabled Attacks

Evolving Attack Sophistication

As generative AI becomes more accessible and powerful, both typosquatting and deepfake attacks will continue to evolve. The proliferation of new top-level domains (TLDs) like .xyz or .coffee creates hundreds of thousands of new opportunities for typosquatters[5].

Meanwhile, the cost of creating convincing deepfakes continues to drop while quality improves. The FBI’s Internet Crime Complaint Center (IC3) reported approximately 200,000 phishing and spoofing incidents in 2024, with AI-enhanced attacks becoming increasingly difficult to detect[11].

Regulatory Response

The regulatory landscape is adapting to these threats:

– The U.S. Anticybersquatting Consumer Protection Act (ACPA) provides legal remedies for typosquatting
– Various states are implementing deepfake-specific legislation
– The NO AI FRAUD Act has been introduced to address deepfake intellectual property concerns
– NIST continues developing standards and evaluation frameworks for synthetic content detection[30]

The Role of AI in Defense

Ironically, the same AI technology that enables deepfakes is also key to defending against them. Machine learning models can detect subtle artifacts that human observers miss. NIST evaluations show that well-trained AI detection systems can achieve over 90% accuracy in identifying synthetic content[27][28].

Conclusion

Typosquatting and deepfake attacks represent two sides of the same coin: cybercriminals exploiting human psychology and technological vulnerabilities to steal data, money, and trust. The statistics are clear—these threats are growing exponentially, and no organization or individual is immune.

However, the defense strategies outlined in this article, backed by research from NIST, CISA, and leading cybersecurity organizations, provide a roadmap for protection:

**For Organizations**:
– Implement defensive domain registration and continuous monitoring
– Deploy AI-powered detection tools with human oversight
– Establish robust verification protocols for high-value transactions
– Train employees through realistic simulation exercises
– Prepare incident response plans specific to these threats

**For Individuals**:
– Verify links before clicking
– Use bookmarks for important sites
– Be skeptical of urgent requests, especially involving money
– Verify unusual communications through independent channels
– Report suspicious activity

The battle against typosquatting and deepfakes is not one that can be won by technology alone. It requires a combination of technical controls, policy frameworks, user education, and constant vigilance. As Tim Helming from DomainTools noted, organizations face hundreds of squatting domain attempts every day[10]. Similarly, deepfake attacks now occur at a rate of one every five minutes[18].

By understanding these threats, implementing evidence-based defenses, and maintaining awareness of emerging attack techniques, organizations and individuals can significantly reduce their risk. The key is not to react after an attack succeeds, but to proactively build resilient defenses that make these attacks far more difficult and expensive for criminals to execute.

The cybersecurity community, researchers, and technology providers continue developing more sophisticated detection and prevention tools. However, the human element—our ability to question, verify, and remain skeptical when something doesn’t feel right—remains our strongest defense against these evolving threats.

References

[1]: Zscaler ThreatLabz. (2024). “Phishing, Typosquatting and Brand Impersonation Trends and Tactics.” Retrieved from https://www.zscaler.com/blogs/security-research/phishing-typosquatting-and-brand-impersonation-trends-and-tactics

[2]: Keepnet Labs. (2025). “Deepfake Statistics & Trends 2025.” Retrieved from https://keepnetlabs.com/blog/deepfake-statistics-and-trends

[3]: UpGuard. (2020). “Typosquatting Explained with Real-World Examples.” Retrieved from https://www.upguard.com/blog/typosquatting

[4]: InfoSec Insights. (2021). “What Is Typosquatting? Examples & Protection Tips.” Retrieved from https://sectigostore.com/blog/what-is-typosquatting/

[5]: UpGuard. (2020). “Typosquatting Explained with Real-World Examples.” Retrieved from https://www.upguard.com/blog/typosquatting

[6]: MS-ISAC. (2019). “Security Primer – Typosquatting.” Center for Internet Security. Retrieved from https://www.cisecurity.org/insights/white-papers/ms-isac-security-primer-typosquatting

[7]: Wikipedia. (2025). “Typosquatting.” Retrieved from https://en.wikipedia.org/wiki/Typosquatting

[8]: InfoSec Insights. (2021). “What Is Typosquatting? Examples & Protection Tips.” Retrieved from https://sectigostore.com/blog/what-is-typosquatting/

[9]: Splunk. “Typosquatting & How To Prevent It.” Retrieved from https://www.splunk.com/en_us/blog/learn/typosquatting-types-prevention.html

[10]: CSO Online. (2020). “What is typosquatting? A simple but effective attack technique.” Retrieved from https://www.csoonline.com/article/570173/what-is-typosquatting-a-simple-but-effective-attack-technique.html

[11]: Kelser Corp. “How Phishing Attacks Are Evolving With AI And Deepfakes In 2025.” Retrieved from https://www.kelsercorp.com/blog/how-phishing-attacks-evolved-ai-2025

[12]: Adaptive Security. (2025). “Deepfake Phishing: The Next Evolution in Cyber Deception.” Retrieved from https://www.adaptivesecurity.com/blog/deepfake-phishing

[13]: Authentic8. (2024). “What is typosquatting? Attack examples and defense strategies.” Retrieved from https://www.authentic8.com/blog/what-is-typosquatting

[14]: DeepStrike. (2025). “Deepfake Statistics 2025: The Data Behind the AI Fraud Wave.” Retrieved from https://deepstrike.io/blog/deepfake-statistics-2025

[15]: PhishCare. “Top 10 Deepfake Phishing Scams (2026).” Retrieved from https://phishcare.com/top-10-deepfake-phishing-scams/

[16]: IBM. (2025). “How a new wave of deepfake-driven cyber crime targets businesses.” Retrieved from https://www.ibm.com/think/insights/new-wave-deepfake-cybercrime

[17]: Security.org. (2024). “2024 Deepfakes Guide and Statistics.” Retrieved from https://www.security.org/resources/deepfake-statistics/

[18]: DeepStrike. (2025). “Deepfake Statistics 2025: The Data Behind the AI Fraud Wave.” Retrieved from https://deepstrike.io/blog/deepfake-statistics-2025

[19]: Right-Hand AI. (2025). “The State of Deep Fake Vishing Attacks in 2025.” Retrieved from https://right-hand.ai/blog/deep-fake-vishing-attacks-2025/

[20]: UpGuard. (2020). “Typosquatting Explained with Real-World Examples.” Retrieved from https://www.upguard.com/blog/typosquatting

[21]: Huntress. “What is Typosquatting? Domain-Based Deception Explained.” Retrieved from https://www.huntress.com/cybersecurity-101/topic/what-is-typosquatting

[22]: Centraleyes. (2022). “Do Any Laws Apply to Typosquatting and Cybersquatting?” Retrieved from https://www.centraleyes.com/question/do-any-laws-apply-to-typosquatting-and-cybersquatting/

[23]: Microsoft Support. “What is typosquatting?” Retrieved from https://support.microsoft.com/en-us/topic/what-is-typosquatting-54a18872-8459-4d47-b3e3-d84d9a362eb0

[24]: CPO Magazine. (2021). “NIST and CISA Release Guidelines for Organizations and Vendors To Defend Against Software Supply Chain Attacks.” Retrieved from https://www.cpomagazine.com/cyber-security/nist-and-cisa-release-guidelines-for-organizations-and-vendors-to-defend-against-software-supply-chain-attacks/

[25]: Identity.com. (2025). “Breaking Down NIST’s Updated Digital Identity Guidelines.” Retrieved from https://www.identity.com/nists-updated-digital-identity-guidelines/

[26]: PMC. “Unmasking digital deceptions: An integrative review of deepfake detection, multimedia forensics, and cybersecurity challenges.” Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC12508882/

[27]: Pindrop. (2025). “NIST Evaluation Results in Deepfake Detection.” Retrieved from https://www.pindrop.com/article/nist-evaluation-results-deepfake-detection/

[28]: Infosecurity Magazine. (2025). “NIST Unveils Guidelines to Help Spot Face Morphing Attempts.” Retrieved from https://www.infosecurity-magazine.com/news/nist-unveils-guidelines-spot-face/

[29]: Department of Defense. (2023). “Joint CSI: Contextualizing Deepfake Threats to Organizations.” Retrieved from https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

[30]: Epstein Becker Green. “Preparing for the Cybersecurity and Fraud Risks of Deepfakes: What Executive Teams Need to Know.” Retrieved from https://www.healthlawadvisor.com/preparing-for-the-cybersecurity-and-fraud-risks-of-deepfakes-what-executive-teams-need-to-know

[31]: NIST. “Open Media Forensics Challenge (OpenMFC).” Retrieved from https://mfc.nist.gov/
