Going Forward with Archivy - Devlog
I have been working on a new knowledge management tool named Archivy since
June. You can take a look at what it’s like in the gif below:
It lets you index your knowledge and search it efficiently with Elasticsearch. You can also save links you find interesting locally and protect yourself from link rot. You can read more about what it does in the README. This post is about how Archivy grew, the progress we’ve made, the changes we want to add, and some more technical details. You can read more about what makes it useful here.
On June 4, I began working on a knowledge management system that you would run locally as a web app. It would act as a digital extension of your brain, searchable with Elasticsearch, a powerful search engine I had my eye on. Even at this stage I wanted to focus on the idea that users could easily sync their identity from third-party services into Archivy. Archivy is, and always was, also meant to save the content held on third-party services like Hacker News, Reddit, Pocket, etc. This gives you a central node where you can search content from your entire digital presence.
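To give a flavor of the kind of full-text search Elasticsearch enables, here is a minimal sketch of a query body an app like this could send it. The field names ("title", "content") and the index name are assumptions for illustration, not Archivy's actual schema:

```python
# Illustrative sketch only: a multi-field full-text query body for
# Elasticsearch. Field and index names here are assumptions, not
# Archivy's actual schema.

def build_search_query(term: str) -> dict:
    """Build a simple multi-field full-text query body."""
    return {
        "query": {
            "multi_match": {
                "query": term,            # the user's search terms
                "fields": ["title", "content"],  # assumed document fields
            }
        }
    }

# With the official `elasticsearch` Python client, this body could be
# sent to a local instance along the lines of:
#   Elasticsearch("http://localhost:9200").search(
#       index="archivy", body=build_search_query("link rot"))
print(build_search_query("knowledge management"))
```

Elasticsearch then ranks matching documents by relevance across the listed fields, which is what makes searching a large personal knowledge base feel instant.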
Armed with the prototype of an idea and a semi-functional app, I posted a link to the repo on Hacker News on August 19, 2020, and to a few subreddits like r/DataHoarder. I got tons of useful feedback, and the attention it brought to the project allowed me to iterate much faster and go further with other contributors.
I’d like to thank all the people who helped Archivy move forward, and credit the highlights of the major features that have been worked on and merged into Archivy since then:
- Packaging Archivy on PyPI to make installation much easier.
- An API to enable user scripting, built with the game-changing help of clemux.
- Core support for Pandoc Markdown, which added many features to the Markdown parser.
- Integration of a login system that lets people use Archivy from a remote machine.
- A very recent docs website.
- Most notably, a plugin system that makes it easy to write Python packages that wrap around Archivy. One plugin I wrote lets you download your Hacker News upvoted and favorited posts. This is a stepping stone toward building an ecosystem for Archivy and making it your “data stronghold”: a place where you can save third-party information, like these HN posts and comments.
- Other notable additions include a sidebar, a new dark theme, and refactors and improvements to the codebase’s logic. You can view more here.
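For readers curious how a plugin system like this can work, here is a minimal sketch of a common Python pattern: plugin packages register a callable under a setuptools entry-point group, and the host app discovers and loads them at startup. The group name "archivy.plugins" is an assumption for illustration; see Archivy's docs for the real mechanism.

```python
# Sketch of entry-point-based plugin discovery, a standard pattern for
# letting third-party Python packages extend a host app. The group name
# "archivy.plugins" is hypothetical here.
from importlib.metadata import entry_points

def load_plugins(group: str = "archivy.plugins") -> dict:
    """Map each plugin name registered under `group` to its loaded callable."""
    eps = entry_points()
    # Python 3.10+ exposes .select(); older versions return a dict of groups.
    selected = eps.select(group=group) if hasattr(eps, "select") else eps.get(group, [])
    return {ep.name: ep.load() for ep in selected}

# A plugin package would declare itself in its packaging metadata, e.g.:
#   [project.entry-points."archivy.plugins"]
#   hn-sync = "archivy_hn:sync"        # hypothetical names
plugins = load_plugins()
print(sorted(plugins))  # empty unless a matching package is installed
```

The nice property of this design is that installing a plugin is just `pip install`: no changes to the host app are needed for it to be picked up.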
We’ve managed to build a community, and I look forward to seeing how Archivy becomes even better in the months to come.