
Hey there, I'm working on an app that guides people who are new to remote work toward a style of working very close to the one you're describing here. If you'd be open to a chat, I'd love your feedback on how we could improve based on your experience.

The lessons going into it now come from my experience at an all-remote unicorn, and your consulting-based viewpoint would be valuable to hear.




Hi Jason, it looks like you are asking about AsyncGo. After reading your Remote Work Hub interview and poking around the docs, I think you have a very viable product for teams that are already highly text-centric. Teams/Slack/etc. are terrible for the exact fit you are aiming at, which I think of as "structured, directed chat". I don't know what your go-to-market strategy is, so I have no idea whether you are already addressing any of the issues that came to mind after giving those materials a quick once-over.

Voice-centric. This is very difficult to internalize for those of us here who cut our working teeth on text. We literally live in a context cocooned in text: email, chat, complex application UIs, web pages, editors, calendars, and terminals. But we're vastly outnumbered by the rest of the world, who get things done largely by voice or near-proxies for it. Whether with peers, direct reports, managers, stakeholders, assistants, or any other relationship, the majority of interactions are transacted over voice, snippets of text so brief they might as well be voice, sometimes highly structured apps (like truck dispatch apps) that might as well be snippets of text, pictures (still or moving), and, rarest of all, the kind of text we deal with in our industry.

This is text that sits in unstructured form until it is internalized and actively, cognitively modeled. Even highly structured code with a strict AST counts, because unless I've read the code before, it comes at me as a blob until I've applied cognitive effort to comprehending it. If it weren't this way, the majority of advertisements would be long-form text. There is a highly specialized corner of marketing that does exactly that, but the overwhelming majority of advertising works on this predominantly voice-like ingestion pattern.

If your product-market fit lies outside the group of people who are used to transacting in text, then I don't know how to solve that problem without Uber-scale buckets of money. (And even inside tech companies there are tons of people who still vastly prefer voice, even setting aside the social-dominance overtones that conveying requests by voice brings to the picture.)

From recording to synthesized structure. This is the gap every tool in this genre aims to cross: from the passive act of inscribing, through a conversion process, to active internalization that produces decisions and results. You cannot make people cognitively apply themselves to taking in raw information, internalizing it, and then offering synthesis. Watch a lot of meetings for the following: how many people regurgitate the recorded/known data, or only its first-order consequences, in their own words (thereby typically cementing their understanding), and how many summarize it into choices and tradeoffs and synthesize a proposed solution that takes second- or even third-order effects into account? A great number of tools in this space fall into the recording trap. "Here, I enabled you to record this phone conversation, that web meeting, whatsit email. Now go make something of it."

We're still missing a data auto-editorial function, not just in this toolspace but in civilization generally. The Big Hairy problem space isn't recording so much as accurate, precise, fast synthesis. We have too much recording as it is; what we lack is reliably finding the valuable parts of the recordings. As much as people like to dump on Palantir here, they're tackling that synthesis problem head-on; they're basically spraying a firehose of money at it indiscriminately, and they're chipping away through a lot of brute force (which I suspect is the only way, initially). This is why you see people asking each other over email for the same information they emailed each other about last month instead of searching the email archives. Associative, importance-based memory beats search, and search beats raw data.

What is interesting to me about all this is that we aren't even widely supporting interrupt-driven annotation and organizing, even though our biological hardware is optimized for that modality. Vision keying on motion, audio keying on differentials breaching the background noise (where the background is cognitively processed, not just a decibel threshold), pattern recognition, and so on: our hardware platform is primed for an interrupt-driven existence, yet our state-of-the-art workplace computer interactions are primarily batch-based. It is no wonder Instagram is a smash success while Outlook, on the market an order of magnitude longer, is "just" a square office app, despite a user of the latter conveying far more information in a day than in a week on the former.

To make this concrete: since we can attach video to a topic, an even better interrupt-friendly interaction would be commenting directly into the video, either by typing or by talking into speech-to-text (or video-over-cam), and having that comment emerge into the topic alongside the video. That gets you halfway to summarization at low effort from the users. As much as I like Markdown myself, unless I'm working with a developer-centric organizational culture, I point teams toward rich text editors (which are free to encode to Markdown underneath).

Organizing topics will become an issue, especially in cross-functional teams, which are nearly guaranteed to have differing taxonomies and even ontologies. Coercing them all into One Tag Cloud to Rule Them All seems to discourage adoption in my limited experience, which I suspect is due to some kind of conceptual-modeling impedance mismatch between teams. With storage and processing as cheap as they are these days, I'd like to see the results of interrupt-driven, search-history-directed, team-oriented-categorization organizing: build the associative net from what people say they want to remember about a topic, what they search for and linger upon longest after apparently pausing their search, and what ML-identified commonalities they share with other teammates (relationships pulled from a directory service).
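To sketch what I mean by that associative net: the three signals (explicit "remember this" notes, search dwell time, and teammate relationships) just accumulate edge weights between people and topics. Everything below is a hypothetical toy model of my own, not anything AsyncGo actually does; the class name, weightings, and caps are all illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class AssociativeNet:
    # (user, topic) -> accumulated association strength
    weights: dict = field(default_factory=lambda: defaultdict(float))

    def note_remember(self, user: str, topic: str) -> None:
        # An explicit "remember this about the topic" statement is the
        # strongest signal, so it gets the largest fixed bump (assumed 3.0).
        self.weights[(user, topic)] += 3.0

    def record_dwell(self, user: str, topic: str, seconds: float) -> None:
        # Lingering on a result after apparently pausing the search suggests
        # it answered the query; cap the contribution so a forgotten open
        # tab can't dominate the net.
        self.weights[(user, topic)] += min(seconds / 30.0, 2.0)

    def share_with_teammates(self, user: str, teammates: list, topic: str) -> None:
        # Propagate a fraction of one user's association to teammates pulled
        # from a directory service: a crude flat-factor stand-in for the
        # ML-identified-commonality piece.
        for mate in teammates:
            self.weights[(mate, topic)] += 0.5 * self.weights[(user, topic)]

    def top_topics(self, user: str, n: int = 3) -> list:
        # Surface what this user is most strongly associated with.
        scored = [(t, w) for (u, t), w in self.weights.items() if u == user]
        return [t for t, _ in sorted(scored, key=lambda x: -x[1])[:n]]

net = AssociativeNet()
net.note_remember("alice", "q3-roadmap")        # explicit note: +3.0
net.record_dwell("alice", "pricing-page", 45)   # 45s dwell: +1.5
net.share_with_teammates("alice", ["bob"], "q3-roadmap")
print(net.top_topics("alice"))  # strongest association first
```

The point of the toy isn't the numbers; it's that none of the signals require the user to do any deliberate filing, which is exactly the interrupt-driven property I'm arguing for.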

That's my off-the-cuff reaction.


What a great perspective, thanks for sharing. Going to give it some thought before replying further.





