combat LLM spam by building a web of trust
Tangled has introduced a vouching system that lets users signal trust or distrust in others, aimed at combating low-quality LLM-generated contributions. Users can vouch for or denounce contributors, and visual indicators appear next to profiles based on direct or indirect connections within a viewer's trusted circle. For now the system shows warning labels without restricting access, aiming to reduce the review burden on maintainers while avoiding punitive measures.
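The "trusted circle" logic amounts to a short walk over a signed social graph. The sketch below illustrates how an appview might resolve the badge a viewer sees on a profile from direct vouches plus one hop of indirect ones; the type names, the `resolveBadge` function, and the one-hop depth are all assumptions for illustration, not Tangled's actual implementation.

```typescript
// Hypothetical sketch of web-of-trust badge resolution; not Tangled's code.
type Verdict = "vouch" | "denounce";

interface VouchRecord {
  author: string;   // DID of the user who vouched or denounced
  subject: string;  // DID of the user being vouched for or denounced
  verdict: Verdict;
  reason?: string;  // optional free-text reason
}

// Resolve the badge a viewer sees on a subject's profile:
// a direct vouch/denounce wins; otherwise fall back to the
// verdicts of the viewer's directly-vouched circle (one hop).
function resolveBadge(
  viewer: string,
  subject: string,
  records: VouchRecord[],
): Verdict | null {
  const direct = records.find(
    (r) => r.author === viewer && r.subject === subject,
  );
  if (direct) return direct.verdict;

  // The viewer's circle: everyone the viewer has directly vouched for.
  const circle = new Set(
    records
      .filter((r) => r.author === viewer && r.verdict === "vouch")
      .map((r) => r.subject),
  );

  // Indirect signal: any verdict on the subject from within the circle.
  const indirect = records.find(
    (r) => circle.has(r.author) && r.subject === subject,
  );
  return indirect ? indirect.verdict : null; // null = no badge shown
}
```

In practice an appview would index and precompute this per viewer rather than scanning all records at render time, but the trust semantics are the same.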
- Tangled users can now vouch for or denounce others, with a green or red shield icon displayed on profiles.
- Vouching includes an optional reason and is visible only to users and their trusted circles.
- Being denounced carries no access consequences; only a visible warning label appears in the UI.
- Vouch records are stored on a user's PDS and aggregated by the Tangled appview at interaction points like issues and PRs (see the sketch after this list).
- Future plans include vouch decay over time and linking evidence such as merged PRs to vouch records.
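Because vouch records live in the author's own PDS, creating one is an ordinary atproto record write, which the appview then indexes to surface badges at issues and PRs. Below is a hedged sketch using `@atproto/api`; the service URL, handle, collection NSID `sh.tangled.graph.vouch`, and record fields are guesses at the shape, not Tangled's published lexicon.

```typescript
import { AtpAgent } from "@atproto/api";

// Minimal sketch of writing a vouch record to the author's own PDS.
// The collection NSID and record fields are assumptions for illustration;
// consult Tangled's published lexicon for the real shape.
async function vouchFor(subjectDid: string, reason?: string) {
  const agent = new AtpAgent({ service: "https://example-pds.invalid" });
  await agent.login({
    identifier: "alice.example.com",   // placeholder handle
    password: "app-password-here",     // placeholder app password
  });

  await agent.com.atproto.repo.createRecord({
    repo: agent.session!.did,             // record lives in the author's repo
    collection: "sh.tangled.graph.vouch", // hypothetical NSID
    record: {
      subject: subjectDid,               // DID of the user being vouched for
      verdict: "vouch",                  // or "denounce"
      reason,                            // optional free-text justification
      createdAt: new Date().toISOString(),
    },
  });
}
```

Keeping the record in the author's repo, rather than in a central database, is what lets any appview aggregate vouches and lets users carry their trust graph with them.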
Opening excerpt (first ~120 words)
Tangled now has native support for vouching! You can vouch for or denounce users that you interact with. Vouched users will have a green shield icon beside their profile pictures, and denounced users will have a red one. You can use this to inform decisions about an interaction. You can also see the vouch/denounce decisions made by your circle.

why vouch?

Vouching serves as a signal of trust to your circle. The bar to submit code to a project has never been lower thanks to LLM-based tooling. LLM tools are really good at creating "uncanny valley" submissions: code that looks correct but is subtly wrong. The onus is now on maintainers to take the time to review such submissions.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is linked on Lobsters.