The Emergence

Ensuring Digital Authenticity with Trust Watermarking

The Importance of Authenticity

Authenticity in digital content is paramount to combating misinformation and ensuring trust in online interactions.

Now that we have entered the age of Generative AI, it is impossible not to question whether what we see, read, or hear is real. Although we are too early into this new reality to see its full effects, a new era of fakery is upon us, the likes of which has never been seen before. From bad actors cloning the voices of loved ones to extort money, to manipulated videos of politicians potentially triggering a worldwide tragedy, to AI-generated forged documents, the potential dangers of unchecked fake content are endless.

What could be done about this?

What should be done? I fervently believe we need to adopt a universal watermarking standard to identify whether content is original, modified by the author, modified by another party, or modified by AI. Full disclosure: I have no idea if such standards are in the works, as I am sure this has been bandied about for years. But what is clear is that no standard is in place today, and as we witness almost unbelievable leaps in Generative AI, it seems it would be wise to pick up the pace.

Insights from ChatGPT

What follows is what surfaced through my many conversations with ChatGPT on the subject. First and foremost, I have been wildly impressed by what I have learned about this subject in a fairly short span of time. But what is glaringly obvious is that the more time we spend asking questions of Generative AI, the better the outcomes we will see.

Below is my conversation with ChatGPT on the concept of Ensuring Digital Authenticity with Trust Watermarking.

Understanding Trust Watermarking

Trust watermarking involves embedding invisible markers within digital content, allowing for the verification of its authenticity.

PROMPT
Based on our conversations on trust watermarking, could you describe what it is and could you provide examples of the looming threats in front of us without such measures in place?

CHAT GPT
Trust watermarking is a proposed system designed to embed markers within digital content that indicate its authenticity and history of modifications. This system would provide a clear indication of whether the content is original, has been modified by the author, altered by a third party, or generated by AI. It acts as a digital signature, verifying the integrity and origin of data, text, images, and video.

Without such measures in place, the digital landscape faces significant threats. Imagine a scenario where AI-generated news articles spread false information about a public health crisis, leading to widespread panic and misinformed decisions by the public. In another instance, deepfake technology could be used to create convincing videos of corporate executives making false statements, potentially crashing stock prices and undermining investor confidence. Additionally, academic research papers could be altered by unauthorized parties, jeopardizing the integrity of scientific advancements and misleading the research community.

These examples highlight the urgent need for a system like trust watermarking to ensure the authenticity and integrity of digital content. By implementing such measures, we can protect ourselves from the potentially devastating consequences of unchecked digital fakery, fostering a more secure and trustworthy digital environment.

PROMPT
Why would trust watermarking be important in the consumption of data, text, and media?

CHAT GPT
Trust watermarking is crucial in the consumption of data, text, and media because it provides a framework for verifying the authenticity and integrity of digital content. In an era where information is consumed rapidly and often without scrutiny, trust watermarking acts as a safeguard against misinformation and manipulation.

For data, trust watermarking ensures that datasets used in critical fields such as healthcare, finance, and scientific research are authentic and unaltered. Imagine a scenario where patient records are tampered with, leading to incorrect diagnoses and treatments, or financial data is manipulated, resulting in erroneous market decisions. Trust watermarking can prevent these scenarios by verifying the integrity of the data from its source to its end use.

In the realm of text, trust watermarking can protect the credibility of journalism and academic research. With the rise of deepfakes and AI-generated articles, it becomes increasingly difficult to distinguish between genuine and fake news. Trust watermarking can help readers verify the authenticity of articles and ensure that academic papers have not been altered post-publication, maintaining the integrity of scholarly work.

When it comes to media, including images and videos, the implications are even more profound. Visual content is highly influential and can shape public opinion and behavior. Without trust watermarking, altered images and videos can be used to deceive viewers, spread propaganda, or create false evidence. For instance, doctored images could be used to manipulate historical records or to fabricate events that never occurred. Trust watermarking provides a way to verify the authenticity of such content, ensuring that what we see is genuine and reliable.

Trust watermarking is essential for maintaining the credibility and reliability of digital content across all forms—data, text, and media. It empowers consumers to make informed decisions, fosters trust in digital interactions, and protects society from the potentially harmful effects of digital manipulation and misinformation.

Understanding a Related Standard

The UTF-8 standard offers a model for how trust watermarking could be applied universally and efficiently across diverse digital content.

PROMPT
What is the UTF-8 standard?

CHAT GPT
UTF-8, or Unicode Transformation Format - 8-bit, is a character encoding standard that allows for the representation of text from virtually every writing system in the world. It uses a variable-length encoding scheme, where characters are encoded using one to four bytes. This flexibility makes UTF-8 efficient for encoding a wide range of characters: commonly used characters (such as those in the ASCII set) are stored in a single byte, while less common characters require more bytes. This system is backward compatible with ASCII, meaning that any valid ASCII text is also valid UTF-8 text.

The adoption of UTF-8 has been widespread due to its efficiency, compatibility, and ability to handle the diverse needs of global digital communication. It supports the full range of Unicode characters, ensuring that text in any language can be correctly represented and processed. This universality and interoperability have made UTF-8 the preferred encoding standard for the web and modern computing, allowing seamless text exchange across different platforms and applications.
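The variable-length design described above is easy to see in practice. The following minimal Python sketch (an illustration added for this essay, not part of the original conversation) encodes characters from different scripts and shows the byte count growing from one byte for ASCII up to four bytes for an emoji:

```python
# UTF-8 uses one to four bytes per character, depending on the code point.
for ch in ["A", "é", "€", "😀"]:
    encoded = ch.encode("utf-8")
    print(f"{ch!r}: {len(encoded)} byte(s), hex {encoded.hex()}")

# Backward compatibility: valid ASCII bytes are also valid UTF-8.
assert "A".encode("utf-8") == b"A"
```

Running this shows "A" taking 1 byte, "é" 2 bytes, "€" 3 bytes, and "😀" 4 bytes, which is exactly the efficiency property the answer describes.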

PROMPT
How could the UTF-8 encoding standard serve as a model to illustrate the potential of Trust Watermarking as a standard for embedding and verifying trust in the digital landscape of data, text, images, and video? Consider how UTF-8's universality, efficiency, and interoperability could apply to Trust Watermarking, enhancing transparency and reliability in digital interactions. Discuss the implications of adopting Trust Watermarking across diverse platforms and applications, and its role in fostering a more secure and trustworthy digital environment.

CHAT GPT
The UTF-8 encoding standard serves as an exemplary model for Trust Watermarking, showcasing how a well-designed standard can revolutionize digital interactions by embedding and verifying crucial information. UTF-8's universality, which allows it to represent characters from virtually every writing system, parallels the need for Trust Watermarking to be a universally recognized and adopted standard. Just as UTF-8 can seamlessly encode text across diverse languages and platforms, Trust Watermarking can embed Trust Signals in various types of digital content—data, text, images, and video—ensuring their authenticity and integrity.

UTF-8's efficiency in using variable-length encoding makes it a practical and scalable solution, able to handle different characters with minimal overhead. Similarly, Trust Watermarking must be efficient in embedding markers within content without significantly increasing file sizes or processing requirements. This efficiency is crucial for widespread adoption, as it minimizes the impact on storage and transmission while providing robust verification mechanisms.

Interoperability is another key strength of UTF-8, enabling text encoded in this format to be correctly interpreted by different systems and applications worldwide. Trust Watermarking can draw from this principle by creating a standard that is interoperable across various digital platforms and applications. By embedding standardized Trust Signals, it ensures that any system can verify the content's authenticity, regardless of its origin or the tools used to create or modify it.

The adoption of Trust Watermarking, much like UTF-8, would enhance transparency and reliability in digital interactions. For instance, users could trust that news articles, research papers, or social media posts are genuine and unaltered. In a world increasingly susceptible to digital manipulation, this standard could prevent the spread of misinformation, protect intellectual property, and preserve the integrity of digital communications.

Implementing Trust Watermarking across diverse platforms and applications would require collaboration among industry leaders, similar to the collective efforts that drove the adoption of UTF-8. Media companies, software providers, device manufacturers, and web browsers would need to integrate these standards into their systems, ensuring a consistent approach to verifying content. This would foster a more secure and trustworthy digital environment, where users can confidently consume and share information, knowing its authenticity is guaranteed.

Enforcement of the Standard

How to enforce a Trust Watermarking standard 

PROMPT
What would it take to enforce a Trust Watermarking standard throughout the international community of media companies, device manufacturers, software providers, and web browsers?

CHAT GPT
Enforcing a Trust Watermarking standard globally would require a coordinated and multifaceted approach:

  • International Agreements and Regulations: Governments and regulatory bodies must establish mandatory Trust Watermarking standards, updating existing digital content and cybersecurity laws.

  • Industry Collaboration: Key industry players, including media companies, device manufacturers, software providers, and web browsers, need to agree on and integrate the standard, facilitated by industry consortiums.

  • Technical Integration: Develop robust infrastructure, including software libraries, APIs, and SDKs, to support watermarking across platforms, requiring updates to operating systems, content management systems, and web browsers.

  • Certification and Compliance: Independent organizations should certify products and services that adhere to the standard, with regular audits and recognizable trust marks for compliance.

  • Education Campaigns: Educate industry stakeholders on implementation and maintenance, and run public awareness campaigns to highlight the benefits and encourage demand for compliance.

  • Incentives and Penalties: Offer financial incentives for compliance and impose penalties for non-compliance to ensure adherence.

  • Interoperability and Open Standards: Promote open and interoperable standards to facilitate adoption and integration into existing workflows.

  • Continuous Improvement: Establish a governing body to regularly review and update the standard, ensuring it adapts to new challenges and technologies.

  • Global Standards Organizations: Involve organizations like W3C and ISO to formalize and promote the standard, lending credibility and encouraging adoption.

By addressing these areas, we can create a secure and trustworthy digital environment where content authenticity and integrity are consistently verified.

The Mechanisms of Trust Watermarking

Trust watermarking works by embedding unique identifiers that can be verified using specialized software.

PROMPT
Can you give a brief recap of our conversation on trust tagging in the metadata of devices, software and browsers and briefly discuss how the following metadata tagging structure could be used to support trust watermarking in the digital landscape?

<meta name="content-status" content="[Unaltered]">

<meta name="content-status" content="[Altered-by-source]">

<meta name="content-status" content="[Altered-by-third-party]">

<meta name="content-status" content="[Altered-by-ai]">

CHAT GPT
Trust tagging involves embedding metadata within digital content to indicate its status and history of modifications. This approach enhances transparency and allows users to verify the authenticity of the content they are consuming. By using metadata tags, we can provide crucial information about whether the content is original, modified by the author, altered by a third party, or generated by AI. This method leverages existing web technologies, making it feasible for widespread adoption across devices, software, and browsers.

Benefits of Trust Tagging

  • Enhanced Transparency: These tags provide immediate and clear indicators of the content's authenticity and modification history. Users can quickly ascertain whether the content is in its original form or has been altered, and if so, by whom.

  • User Trust and Confidence: By clearly labeling the content status, users can trust the information they are consuming. For example, an "Unaltered" tag assures the user that the content is original and untouched, while an "Altered-by-ai" tag alerts them to potential modifications by AI, prompting further scrutiny.

  • Content Verification: Integrating these tags into the metadata of digital content allows for automated verification processes. Browsers and software can be programmed to recognize and display these tags, providing users with real-time information about the content's trustworthiness.

  • Accountability and Traceability: By tagging content with detailed modification histories, we establish accountability. Content creators and modifiers are identifiable, making it harder for malicious actors to spread false information without detection.

  • Interoperability: These tags can be easily adopted across various platforms, ensuring that the trust status of content is consistently communicated regardless of the device or software used. This interoperability is crucial for widespread adoption and effective implementation of trust watermarking standards.

  • Facilitating Regulation and Compliance: Governments and regulatory bodies can use these tags to enforce digital content standards, ensuring that all published content adheres to transparency and authenticity guidelines. This can help curb the spread of misinformation and protect consumers from manipulated content.

Implementing a metadata tagging structure like this for Trust Watermarking can significantly enhance the transparency, reliability, and accountability of digital content. By embedding clear indicators of content status within metadata, we create a system that supports the verification and trustworthiness of data, text, images, and videos across the digital landscape.
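To make the tagging idea concrete, here is a hypothetical Python sketch of how a browser extension or crawler might read the proposed content-status tag from a page's HTML. The tag name and its values follow the structure proposed in this essay; no such standard exists today, so everything here is illustrative.

```python
from html.parser import HTMLParser

# Hypothetical sketch: extract the proposed "content-status" trust tag
# from a page's metadata. The tag name and values mirror the essay's
# proposed structure; they are not an existing standard.
class TrustTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.content_status = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name") == "content-status":
                self.content_status = attrs.get("content")

html_doc = '<html><head><meta name="content-status" content="[Altered-by-ai]"></head></html>'
parser = TrustTagParser()
parser.feed(html_doc)
print(parser.content_status)  # → [Altered-by-ai]
```

A real implementation would then map the extracted status to a visual indicator, but the core lookup is this simple because the tags ride on existing HTML metadata conventions.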

PROMPT
Could you provide a simplified version of how video could be Trust Tagged even down to the timecode of the clips?

CHAT GPT
Certainly! Here's a simplified explanation of how video content can be trust tagged down to the timecode of the clips:

Simplified Version of Trust Tagging for Video

  • Embedding Metadata in Video Files

    • File-Level Metadata: The overall video file can contain metadata that indicates the general trust status of the entire video. This could include tags such as "Unaltered," "Altered-by-source," "Altered-by-third-party," or "Altered-by-ai."

    • Timecode-Based Metadata: Each frame or segment of the video can have specific metadata embedded, indicating the trust status of that particular segment. This is done using timecodes to pinpoint the exact start and end times of each tagged section.

  • Example Metadata Structure

File-Level Tagging:

<metadata>
  <content-status>Unaltered</content-status>
</metadata>

Timecode-Based Tagging:

<metadata>
  <segment start="00:00:00" end="00:01:00">
    <content-status>Unaltered</content-status>
  </segment>
  <segment start="00:01:01" end="00:02:00">
    <content-status>Altered-by-source</content-status>
  </segment>
  <segment start="00:02:01" end="00:03:00">
    <content-status>Altered-by-third-party</content-status>
  </segment>
  <segment start="00:03:01" end="00:04:00">
    <content-status>Altered-by-ai</content-status>
  </segment>
</metadata>

  • Implementation:

    • Encoding Metadata: During the video editing or encoding process, the software used can insert these metadata tags directly into the video file. Advanced video formats like MP4 or MKV support such metadata embedding.

    • Reading Metadata: Video players and editing software can be updated to read and display these metadata tags. For instance, a video player could show an overlay or provide information about the trust status of the current segment being viewed.

  • Timecode Precision:

    • By tagging video segments with precise timecodes, viewers can be informed about the trust status of specific parts of the video. This granularity helps users discern which sections are original and which have been altered, providing greater transparency and trust in the content.
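As a hypothetical sketch of the playback side, a Python script could parse segment-level metadata in the proposed format and report the trust status for each timecode range. The element names follow this essay's proposed structure, not an existing video-metadata standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: read segment-level trust tags from metadata
# shaped like the essay's proposed structure (not a real standard).
xml_doc = """<metadata>
  <segment start="00:00:00" end="00:01:00">
    <content-status>Unaltered</content-status>
  </segment>
  <segment start="00:01:01" end="00:02:00">
    <content-status>Altered-by-ai</content-status>
  </segment>
</metadata>"""

root = ET.fromstring(xml_doc)
statuses = []
for seg in root.findall("segment"):
    status = seg.findtext("content-status")
    statuses.append(status)
    print(f'{seg.get("start")} -> {seg.get("end")}: {status}')
```

A video player could run the same lookup against the current playback position to decide which overlay or badge to display.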

Example Workflow

  • Video Creation: The video is created and edited by the content creator. During this process, they can add metadata tags to the video file indicating the trust status of each segment.

  • Metadata Embedding: The metadata is embedded using video editing software that supports timecode-based tagging.

  • Distribution: The video is published or distributed. The embedded metadata travels with the video file.

  • Playback: When users watch the video, their video player reads the metadata and can display trust information, such as overlays or notifications, indicating the trust status of each segment.

Benefits

  • Transparency: Users are informed about the authenticity and modification history of the video content they are viewing.

  • Trust: By providing detailed trust information, content creators can build trust with their audience.

  • Accountability: Any alterations to the video are clearly marked, holding editors and third parties accountable for changes.

In summary, trust tagging video content down to the timecode involves embedding metadata that indicates the trust status of each segment of the video. This provides a transparent, detailed view of the video's authenticity, enhancing trust and accountability in digital media.

Challenges and Considerations

Addressing potential challenges in implementing trust watermarking.

PROMPT
How can a Trust Watermark system be implemented to allow users to easily detect modified content while they are casually reading, listening to, or watching it? What strategies can be used to incorporate trust tags that effectively signal modifications to the user without requiring them to examine the metadata directly?

CHAT GPT
Deploying a Trust Watermark to enable users to easily detect modified content while they are casually consuming it—whether reading, listening, or watching—requires intuitive and accessible signaling mechanisms. Here are some strategies to achieve this:

Visual Indicators for Text and Images

  • Color Coding: Implement color-coded highlights or borders around text or images to indicate their trust status. For example, green could signify unaltered content, yellow for content altered by the source, red for third-party alterations, and blue for AI-generated modifications. These visual cues would immediately inform users of the content's authenticity without needing to look at the metadata.

  • Icons and Badges: Use icons or badges next to content to indicate its trust status. A checkmark icon could denote original content, an edit icon for source modifications, a warning triangle for third-party alterations, and an AI icon for AI-generated content. These icons can be integrated seamlessly into the user interface.

  • Overlay Notifications: Display small, non-intrusive overlay notifications or tooltips when a user hovers over or clicks on content. These overlays can provide a brief description of the content's trust status, offering additional information without disrupting the reading experience.

Auditory Cues for Audio Content

  • Audio Signals: Embed subtle auditory signals or tones within audio content to indicate its trust status. For instance, a specific tone could play at the beginning of a segment to signify that it has been modified or generated by AI. These signals should be designed to be noticeable yet not disruptive to the listening experience.

  • Voice Annotations: Integrate brief voice annotations at the start of audio clips or segments to indicate their trust status. A voice prompt could say, "This segment has been verified as original," or "This segment has been altered by AI," providing clear information to the listener.

Visual and Auditory Indicators for Video Content

  • Watermark Overlays: Place watermark overlays directly within the video content. These overlays could appear as small, semi-transparent text in a corner of the video, indicating the trust status of the segment currently being viewed. For example, "Original," "Source Modified," "Third-Party Modified," or "AI Generated."

  • Color Bars or Frames: Use color bars or frames at the edges of the video to indicate the trust status. Similar to the text and image strategy, green bars could denote unaltered content, yellow for source modifications, red for third-party alterations, and blue for AI modifications.

  • On-Screen Icons: Display on-screen icons or badges at the beginning of video segments or periodically throughout the video to indicate the trust status. These icons should be unobtrusive but easily recognizable.

User Interaction and Notifications

  • Pop-Up Alerts: Provide pop-up alerts or notifications when the user starts consuming content that has been modified. These alerts can give a brief overview of the content's trust status and offer more details if the user wants to delve deeper.
    *Note: The author of this essay does not like this idea. 

  • Interactive Trust Indicators: Allow users to interact with trust indicators to get more information. For example, clicking on an icon or badge could bring up a detailed explanation of the content's modification history and trust status.

Integration with Platforms and Devices

  • Browser Extensions: Develop browser extensions that automatically analyze and display trust indicators for web content. These extensions can add visual icons or color coding to search results, social media posts, and other online content.

  • Mobile and Desktop Apps: Integrate trust indicators into mobile and desktop applications, ensuring that users can see the trust status of content across different devices and platforms. Apps can display visual or auditory cues depending on the type of content being consumed.

By implementing these strategies, Trust Watermarks can be deployed in a way that makes it easy for users to detect modified content while casually reading, listening to, or watching it. These methods ensure that Trust Signals are clear, intuitive, and seamlessly integrated into the user experience, enhancing transparency and trust without requiring users to look at metadata directly.

PROMPT
What mechanisms or protocols would be available to manage these Trust Signals that would be tagged in the content? How could these be protected from centralized control and bad actors?

CHAT GPT
Managing Trust Signals in content requires robust mechanisms and protocols to ensure authenticity and integrity. To protect these signals from centralized control and bad actors, a combination of decentralized technologies, such as blockchain, and stringent verification processes must be employed.

Mechanisms and Protocols for Managing Trust Signals

  • Distributed Ledger Technology (Blockchain)

    • Immutable Records: Blockchain can be used to create an immutable record of Trust Signals, ensuring that once a trust tag is added, it cannot be altered or removed. Each modification or creation of content is logged on the blockchain, providing a transparent and tamper-proof history.

    • Decentralization: By distributing the ledger across multiple nodes, blockchain eliminates the need for a centralized authority, reducing the risk of manipulation by any single entity.

  • Cryptographic Techniques

    • Digital Signatures: Content can be digitally signed using cryptographic keys. The signature verifies the content's origin and integrity, allowing users to confirm that the content has not been altered since it was signed.

    • Hash Functions: Cryptographic hash functions can create a unique fingerprint for each piece of content. Any change in the content results in a different hash, making it easy to detect alterations.

  • Public Key Infrastructure (PKI)

    • Certificate Authorities: PKI can be used to manage digital certificates that authenticate the identity of content creators and validators. Trusted certificate authorities issue and manage these certificates, ensuring that only verified entities can sign content.

    • Secure Communication: PKI also facilitates secure communication channels for transmitting content and Trust Signals, preventing interception and tampering.

  • Consensus Mechanisms

    • Proof of Work (PoW) / Proof of Stake (PoS): These mechanisms can be used to validate transactions and Trust Signals on a blockchain. They ensure that only legitimate changes are recorded, and they provide a decentralized method of achieving consensus.

  • Solid Framework by Tim Berners-Lee/Inrupt

    • Tim Berners-Lee's Solid framework, developed with his company Inrupt, could potentially offer an innovative approach to managing Trust Signals in digital content. Solid is a decentralized platform that aims to give users control over their data by storing it in personal online data stores (Pods). These Pods can be accessed and managed independently, allowing users to maintain ownership and privacy of their data.

    • Decentralized Data Control: Solid’s architecture could be leveraged to decentralize the management of Trust Signals. Instead of relying on centralized servers, Trust Signals could be stored in individual Pods, controlled by content creators and users. This approach aligns with the principles of data sovereignty and decentralization.

    • Interoperability: Solid emphasizes interoperability, ensuring that data can be shared and accessed across different applications and services seamlessly. This characteristic could facilitate the integration of Trust Signals across various platforms, making it easier for users to verify content authenticity regardless of the service they are using.

    • Enhanced Privacy: By giving users control over their data, Solid enhances privacy and reduces the risk of unauthorized access. Trust Signals managed through Solid would benefit from these privacy protections, ensuring that only authorized parties can modify or access the trust tags.
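The hash-function idea above can be sketched in a few lines of Python. This illustrative example (not from the original conversation) uses SHA-256 to fingerprint a piece of content; altering even a single word produces a completely different digest, which is what makes tampering detectable:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 hex digest acting as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

original = b"The committee approved the proposal on May 2."
altered  = b"The committee rejected the proposal on May 2."

print(fingerprint(original))
print(fingerprint(altered))
print(fingerprint(original) == fingerprint(altered))   # → False
print(fingerprint(original) == fingerprint(original))  # → True
```

In a full Trust Watermarking system, such a digest would be recorded at publication time (for example, on a distributed ledger, or signed with the author's private key), and any verifier could recompute it on the content they receive to confirm nothing has changed.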

Protecting Trust Signals from Centralized Control and Bad Actors

  • Decentralized Governance

    • Community Oversight: Establishing a decentralized governance model allows the community of users, developers, and stakeholders to oversee the system. Decisions about standards, updates, and management are made collectively, reducing the risk of centralization.

    • Distributed Nodes: Ensuring that the infrastructure is distributed across many nodes prevents any single entity from gaining control over the Trust Signal management system.

  • Transparent Protocols

    • Open Standards: Developing and using open standards for Trust Signal management ensures transparency. Anyone can review and verify the protocols, fostering trust and reducing the likelihood of malicious modifications.

    • Audit Trails: Maintaining detailed audit trails for all changes and additions to Trust Signals allows for continuous monitoring and accountability. Regular audits by independent parties can help detect and prevent malicious activities.

  • Resilience to Attacks

    • Redundancy: Implementing redundant systems and backups ensures that even if some nodes or components are compromised, the overall system remains functional and secure.

    • Tamper Detection: Advanced tamper detection mechanisms can identify and respond to unauthorized attempts to alter Trust Signals. Alerts and automatic responses can mitigate potential damage.

  • Incentive Structures

    • Rewards for Participation: Incentivizing participation in the decentralized network, such as through token rewards, encourages a broad and active community. This reduces the risk of control by a small group and promotes a diverse and robust network.

    • Penalties for Malfeasance: Implementing penalties for malicious behavior, such as loss of tokens or access, discourages bad actors from attempting to compromise the system.

Managing Trust Signals in digital content requires a combination of advanced technologies and decentralized approaches. By leveraging blockchain, cryptographic techniques, PKI, consensus mechanisms, and potentially Tim Berners-Lee's Solid framework from Inrupt, we can create a robust and secure system for Trust Signal management. Protecting this system from centralized control and bad actors involves decentralized governance, transparent protocols, resilience strategies, and incentive structures. Together, these measures ensure the integrity, reliability, and trustworthiness of digital content in a secure and decentralized manner.

Conclusion

And this wraps up my conversation with ChatGPT. None of the answers were edited, but I went through endless editing of my prompts to tune the answers to my thoughts. I think it is important to note that I am neither a coder, technologist, nor academic, which may be obvious in my line of questioning, but I do pride myself on being curious. And I will admit that my extreme curiosity raises the question of why such a standard does not exist as we enter an era of manufactured reality that will be impossible to detect with the naked eye or even our brains. It is important to note that this recommended standard is not only meant to detect AI, but to allow the tagging of original content versus edited content versus third-party commentary, as well as AI modifications.

When we search for content on the web, through search engines, messaging apps, social media, or generative AI, the most rudimentary filter I could imagine would be the ability to filter out non-original or modified content. I can’t imagine Google, Microsoft, Apple, Facebook, X, etc. would ever do this of their own accord, as there is too much money to be made in obfuscation, but there are precedents of standards being adopted even without the initial universal support of industry and government.

Consider the adoption of the Imperial and Metric measurement systems and how many apple carts must have been turned over in the process, yet here we are. I offer the concept of Trust Watermarking as a complete outsider with no skin in the game and no claim to being proven right. My interest is to start a conversation that may yield an update on plans already in place, or possibly inspire consideration by people who could put wheels in motion to make this type of change happen. I have no interest in leading this charge, as this is well beyond my mental capacity and available time, but I know there are countless brilliant individuals out there from all walks of life who could help drive these ideas into reality and, more importantly, find ways to collaborate with each other, governmental agencies, and industries to create a universal standard to support authenticity in the connection.

Unlike measurement standards or encoding standards, a Trust Watermark standard or something like it is not only for the consistency and betterment of industry. A Trust Watermark standard could also help embed more trust within the connection to engage not only titans of industry and builders of technology but to engage people from all walks of life into the solution.

I believe Generative AI could play a role in amplifying our curiosity on the subject of Trust Watermarking and possibly bring more people into the fold of building out the tools as well. What is possible when solutions come from the community level over the boardroom level? I will be the first to admit I am out of my depth on an incredibly complicated subject and I am certain I have missed many key points in my inquiry. But I believe people much smarter than I can get lost in the weeds of problems especially when they are wrapped in the barbed wire of bureaucracy and shareholder influence. 

It is my opinion that something as important and challenging as the concept of Trust Watermarking should come from the bottom up and not the top down. At least this is what I am hoping for.

The Emergence
Welcome to The Emergence, a podcast of essays, commentary, and conversations on the failed state of a highly centralized and manipulative web, and ideas on how to reverse this course to create a future web where the user is in control, free from the influence of outside interests.
Whether through the lens of technology, politics, ideologies, philosophies, social orders, or interpersonal relationships, The Emergence will dig deep to find answers that inspire the building of a connection more representative of the user.
Contact: TheEmergenceio@gmail.com