Hacker News
1 in 4 U.S. Teachers Say AI Tools Do More Harm Than Good in K-12 Education (pewresearch.org)
54 points by sarimkx 8 months ago | 78 comments



At the moment, the real killer use-case for the current generation of AI agents (Claude, Gemini, ChatGPT) is when you know you don't know something AND you have a specific task to get done.

Some examples from the past week:

  1. Claude: "Why is this code failing typechecking in TypeScript?" -> gives me detailed reasons why, teaches me about some things I didn't know, and solves the problem for me while uplevelling my understanding.

  2. Gemini: "What kind of contractor do I need to install an oven with a downdraft?" -> Tells me I need an appliance installer, and then links me to https://www.thumbtack.com/ca/san-francisco/appliance-installers and I select the 1st one.

Applying this to education, I think that education needs to pivot explicitly to "solving problems" and not education for education's sake. E.g., the student needs to be engaged with a problem and reach for AI as a tool to solve it and, as a result, up-level their understanding.

If a huge mechanism for assessing folks in education is "write an essay on this" and the teacher then "grades the essay output by the student," that's almost the perfect task where AI can do both ends. Which is pretty much a sign that assessments such as this have low educational impact.


I hate listening to tech people talk about education like it's some problem to be solved. Using a system that you know is wrong 10% of the time for education is a terrible idea. We're trying to teach things like basic literacy and cultural context. School does not exist for the sole purpose of pumping out good little tech workers. Some things are actually hard to measure, and AI being able to put out a convincing-looking essay doesn't mean the act of writing essays has no value for the student. The goal was not the production of an essay.


I am currently in school and have had students sitting next to me use ChatGPT to help with some in-class quizzes; we were allowed to use other resources, so it wasn't cheating. For some of the questions, ChatGPT was flat-out wrong when analyzing code. One specific question asked where the errors were in a block of code, by line number; ChatGPT gave the wrong line number and the wrong number of lines on top of that. The student actually knew how to answer the question but didn't want to think. In a different class, a prof said she knew some students were using ChatGPT because the errors some students made were the same errors ChatGPT made, and they were kind of dumb mistakes if you knew what was going on.

I don't mean to say your experience isn't possible, or that it's just a fluke. What I mean to say is: if you don't know something, ChatGPT shouldn't be your only resource, because you won't know when it's wrong.

edit: words, spelling


> that's almost the perfect task where AI can do both ends. [..] Which is pretty much a sign that assessments such as this have low educational impact.

This seems like a bit too far of a leap.

When teaching math, we want students to be able to prove novel theorems. But all the open problems in mathematics are really hard, so teachers often start by getting students to practice on easier theorems that have already been proven.

In this context, something like "prove the Pythagorean Theorem" is a useful exercise and valuable assessment! You just need to make sure the student actually does it instead of copying the answer from the Internet (directly or via ChatGPT).


> Applying this to education, I think that education needs to pivot explicitly to "solving problems" and not education for education's sake. E.g., the student needs to be engaged with a problem and reach for AI as a tool to solve it and, as a result, up-level their understanding.

It seems you are thinking of project-based learning. That is a great way to learn, but it only works when you have a certain level of baseline knowledge within the "zone of proximal development" of what the task requires. And even then, it does not replace other kinds of learning for developing expertise.

Think of asking a 5th grader to implement the FFT in a programming language of their choice. They wouldn't understand the problem, wouldn't know where to begin, and would probably learn very little. But they could still ask Claude or ChatGPT for an answer, and there's a good chance it would be pretty close.

For a curious and motivated student, AI is an amplifier---similar to how you were using Claude. But most students just want to finish the task, in which case the more powerful the AI, the less the student needs to know to get a result. These students are not using the AI as a tutor or to fill in gaps in their knowledge. They are just copy-pasting the assignment into the prompt and pasting the output as the answer. That is not learning, and it can be done with projects too.

> If a huge mechanism for assessing folks in education is "write an essay on this" and the teacher then "grades the essay output by the student," that's almost the perfect task where AI can do both ends. Which is pretty much a sign that assessments such as this have low educational impact.

Not at all. It's a sign that AI can do the task.


> If a huge mechanism for assessing folks in education is "write an essay on this" and the teacher then "grades the essay output by the student," that's almost the perfect task where AI can do both ends. Which is pretty much a sign that assessments such as this have low educational impact.

Disagree here. AI can (theoretically) be trained to do any task for which it has an I/O interface to perform it. The same goes for humans: they can't inherently prove they "know" the information; they have to show it via an I/O interface (their hands/voice).

As such, any metric for proving human knowledge could be gamed by AI, assuming an environment where AI can be used (i.e., at home). The only place where AI can be controlled is the classroom, like during exams or in-person activities.

So concluding that activity X has "low impact" because AI can do it implies that all education done at home has low impact, because AI could do it (through a human performing verbatim what it instructs, ignoring any explanation the AI might give).


The problem is that you already have a sense of what a correct and what an incorrect answer looks like.

When you go and ask it for some assembly code, ChatGPT will happily mix Intel and ARM instructions.

And if you don't have a frame of reference, you can spend quite a lot of time figuring out what is wrong.


They gave us the go-ahead at work last week so I tried to use ChatGPT today on a task. It failed, and failed, and failed, over and over again. I kept prompting it, telling it where it was wrong, and why it was wrong, and then it would say something trite like, "Got it! Let's update the logic to reflect $THING_I_SAID_TO_IT..." and then it would just regurgitate the exact same broken code over and over again.

It was an absolute waste of my time. I quite like it when I need to get some piece of weird information out of it about the Python standard library or whatever but whenever I try to get it to do anything useful with code it fumbles the bag so hard.


You have to start from scratch. The training data has many examples of almost directly copied blocks of text and code and the model will tend to just repeat itself after several iterations.


An interaction like this led me to cancel my ChatGPT subscription.


AI is unbelievably awesome for education. We just need to raise the bar. If you only use it to do the normal work (essays), it’s a waste and maybe bad.

Now, it is so much easier to teach programming… data analysis… design… all things that K-12 is normally terrible at teaching.


I love how HN is just a cheerleader for AI btw. Like the vast majority of posts are all the same. “AI is unbelievably awesome”. “All dangers are overblown and silly.” “We’ve always had X downside, it’s nothing new” “but Web3 sucks in every possible way.”


Also, incredible levels of the Dunning-Kruger effect for fields outside of programming, like education.


AI has spent the last few months being useful to lots of people across many industries, even if only moderately so. Web3 has spent the decade being useful to approximately no one.


Hey, speaking of what you're saying about Web3: I'm easily amused by https://www.web3isgoinggreat.com/


Damn. Only 1 in 4 U.S. Teachers Say AI Tools Do More Harm Than Good in K-12 Education? Better than expected


The other way to look at it is that 3 out of 4 teachers say AI tools do more good than harm. That's not actually what the survey said; they just have a big 35% "unsure" category, but at this point being unsure is the same as supporting AI. "If you're not against us, you're with us."


The most you can say is that 3 out of 4 are neutral. Excluding the middle is a basic logical fallacy.


I don't think the law of the excluded middle applies here.


What do the other 3 teachers say?


"I'm just a LLM and I'm still learning. I'm afraid I cannot give an opinion on a potentially divisive topic like education, because education is an important part of development of a properly adjusted human being whilst my probabilistic hallucinations are paid for with the taxpayers money which has a history of being a medium of transfer of value and values are an important device in the process of creation of a cohesive society where temperature gradient of knowledge corpora used to construct recombined outputs leads to non-aligned conclusions that may or may not cause distress..."


Seems like LLMs will cause the average student to learn less, while the motivated students can learn much more, and more quickly. Personally I have found LLMs very useful for teaching myself new things, but I could imagine just using them to solve my homework faster without comprehending it, if I were a student who didn't care very much about the material.


One place I think it can be useful is in shortening the feedback loop for learning outside the classroom.

As an example (not K-12!) my wife is learning Japanese. She's taking a class and the teacher gives her homework, but she also studies on her own, including doing exercises in the textbooks beyond the ones the teacher assigns.

Sometimes she'll answer a question, look it up in the answer key, and find she got it wrong. Previously, she had to wait until the next class and find time to ask the teacher to help her understand her mistake.

But I showed her how to use the ChatGPT phone app to get an explanation right away. She doesn't even have to type it in: just take a picture of the quiz (which is all in Japanese, not English) and ask, "Why is the answer to question 7 'C' instead of 'D'?" And she gets back a detailed explanation of the differences between the two answers and why "C" is a better choice, complete with fresh example sentences and breakdowns of what they mean.

Hallucinations are a risk: she could get back a bogus explanation. So far, that hasn't happened, though. She isn't a total beginner and knows enough to be able to tell whether the explanation makes sense. Each time, her reaction has been something like, "Oh, right, I forgot about that," not, "What? Never heard of that." She also tried it on a couple of questions she'd run by her teacher previously, and got back roughly the same explanations the teacher had given her.

In theory she could just ask it for the answer rather than for an explanation of why her answer was wrong, and it would happily give her a usually-correct response. In her case, that's not a problem because her only goal is to actually learn the language. Cheating would serve no purpose at all. But for a K-12 student who mostly cares about getting a good grade, obviously that's going to be a big temptation.


The shift is coming. Just like the internet changed how kids access information, the old way is out.

My wife is an educator and we have long discussions about this. I believe lessons need to be upgraded to require AI, teaching students how to interact with it and prompt it for deeper meaning beyond just "Find the value of the hypotenuse". Something in the form of controlled modules?

Testing can stay exactly the same, i.e. no computers/ai, but learning has to be completely shifted.


Given the use of LLMs to rank answers to questions, I think testing could shift as well. Having an LLM (operating with a good-faith prompt, e.g., trying to genuinely discover a student's level of understanding) hold a back-and-forth with a student to evaluate their comprehension of a topic could be more supportive of students than traditional testing.

Validating that an LLM can do such a task reliably would be a large undertaking, but it seems like something that might be possible in the future.
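
As a rough illustration only (assuming the OpenAI Python client; the model name, prompts, and round count are placeholders, not a validated assessment design), the back-and-forth loop might look something like this:

    # Sketch of an LLM-led comprehension check; not a validated grading tool.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are an examiner acting in good faith. Ask the student one question "
        "at a time about the topic, probe their reasoning with follow-up "
        "questions, and never reveal the answers."
    )

    def oral_assessment(topic: str, rounds: int = 5) -> str:
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"The topic is: {topic}. Begin."},
        ]
        for _ in range(rounds):
            # Model asks a question, student answers on the console.
            reply = client.chat.completions.create(model="gpt-4o", messages=messages)
            question = reply.choices[0].message.content
            print(question)
            messages.append({"role": "assistant", "content": question})
            messages.append({"role": "user", "content": input("Student: ")})
        # Finish with a summary of the student's demonstrated understanding.
        messages.append({"role": "user", "content": "Summarise the student's level of understanding so far."})
        final = client.chat.completions.create(model="gpt-4o", messages=messages)
        return final.choices[0].message.content

The hard part is exactly the validation: checking that the final summary tracks real comprehension rather than the student's fluency or the model's own blind spots.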


The general issue I have with LLMs in testing is scoring/grading. Since the experience will be different for each test, someone or something will have to grade it individually by reading and "understanding their understanding".

It leaves too much to subjectivity.


As someone working on an AI tool for education, I think it's going to be one of the hardest industries to get right, but also the most important.

- It's going to be tricky for AI tutors to make an impact. Teachers will always be better than computers at inspiring students to do better, but for those who are already motivated, AI will provide a great study tool.

- Students have access to LLMs as well now, so any writing assessment will have to be timed and closely monitored, or in-class, otherwise students can easily cheat.

- Teacher-facing tools will come before student-facing tools because of the safety aspect.

We're building an AI grading assistant for teachers (https://automark.io) that helps them provide more writing feedback to students. Papers are extremely time consuming to grade and give feedback on, which is detrimental to both the teacher and the student. Giving students unlimited practice essays before the scored essay is one area where AI will really move the needle.

Grading is also one of the main shortcomings of online courses, so I'm guessing we'll see more in that area in the future where high quality courses can scale to 10k-100k+ students.


>We're building an AI grading assistant for teachers

So students will be generating papers with AI and teachers will be grading them with AI? Maybe the real problem is that education and teaching aren't innovating as fast as tech.

Can't remember where, but I saw a comic not too long ago in which an employee uses AI to generate a long email and the recipient uses AI to summarize it.



Yup, that's the one


> Can't remember where but saw a comic not too long ago when an employee uses AI to generate a long email and the recipient uses AI to summarize it

Bruh. That was a GOOGLE PRESENTATION from like last year.

Of course, since it was more than two weeks ago, it’s been memory holed, but it was Google IO 2023 or something when demonstrating Bard in GSuite.


What a clickbait title.

According to the article:

* 25% more harm than good

* 32% equal mix harm and good

* 35% are not sure

* 6% more good than harm

(Yes, those do not add to 100%, but 98% is close enough.)

I came here looking for a fight, but this article does not back up any use of AI in teaching.

I hope that commenters will write about SPECIFIC ideas they have for AI in education (pro or con).

Right now we are in a situation where (many of) the traditional ways we ask students to demonstrate understanding can be done by a computer.

One important worry I have is that most of the best maths students I know are good at arithmetic/mental math, even though we have had replacements for that since the 70s (at least). I think that a deep understanding of the foundations allows for a better ability to abstract. If we roll over everything with an AI, are students really going to develop the ability to plan (think abstractly)?

I'm not against using AIs to accomplish something, but I think there's an important piece of intellectual development that comes from fluency (without looking everything up). I know one colleague who gets great mileage out of Copilot: great understanding and great productivity. Unfortunately, there are others who are on the "faking it" end of the spectrum - definitely net negatives even when they manage to produce working code.

If you're arguing that LLMs are the future of programming, I won't disagree, but please spend a few sentences imagining how people trained in that world (not the one you trained in) might see things in very, very different ways.


> most of the best maths students I know are good at arithmetic/mental math even though we have had replacements for that since the 70s (at least).

Well, consider (for example) the SAT. It used to have math sections both with and without a calculator. Then COVID hit, and e.g. MIT dropped its SAT requirement altogether - now the SAT is back, but all digital, and the math section is all-calculator. To me this suggests that computers have integrated into society to the point where one's ability without them is seen as irrelevant. This isn't to say that there aren't holdouts who still advocate traditional pen-and-paper testing, just that they are no longer mainstream. Similarly, I would say for programming: if the questions you are asking to determine whether students understand "the foundations of programming" can be easily answered with Copilot or ChatGPT, they are probably bad questions.

As far as teaching, there has always been strong evidence that people can learn math effectively just by playing with their calculators. http://ti-researchlibrary.com/Lists/TI%20Education%20Technol... I certainly learned a few things that way. It is the same way with Copilot - you can actually learn a lot of programming just by playing with it and seeing what "real code" looks like. When you mention fluency, I think you forget how it works - first you write things, then you get used to it and become more fluent. Copilot assists greatly with that first step of learning.

Now it is true, some teachers aren't really good with the Socratic method and other techniques of fostering self-guided learning, but if it is a binary "full access / no access" I would argue full access is better, like how students can apparently use their phones to goof off in class all day without any repercussions (https://news.ycombinator.com/item?id=40495673). There are people who will misuse the access but it is not clear if they would learn anything even without access - it could be that they would just find other ways to ignore the teacher. And meanwhile the rest will benefit significantly - they can look up concepts, ask ChatGPT questions, etc.

And as far as "deep understanding" and the "ability to think and plan abstractly", barely anybody has that, that is not taught in schools. I mean sure there is the watered-down definition of "solve word problems and write essays" or whatever passes for understanding these days, but most of the kids coming out of school are nowhere near the next Einstein, even if they got straight A's. edit: maybe the ability to plan is taught in military training, it's certainly not part of a regular school curriculum.


The article comes off as mostly clickbait, TBH. I live in San Francisco, and our school system here stinks.

I'd say about 25% of the teachers at my kid's school are incompetent, so that kinda lines up with 1 in 4 saying "technology bad". It took the pandemic for the school system to come 20 years into the future technology-wise, and they're still at least 15 years behind.

I honestly wouldn't trust a teacher to tell me what the future should be or if something is good or bad...


My memory is poor, but if I remember anything, it was the apprehension of the establishment over students using Wikipedia as a reference. How can we trust the source when "anybody" can make edits?

I am certainly not saying Wikipedia is scripture; however, those fears -- like so many tech fears before and after -- have turned out to be baseless.

Now if you'll excuse me, I must go and dust off my Encarta CD-ROM. I want to hear what an Elephant sounded like in 1995.


Everything can potentially have mistakes, therefore we should rely on computers to do all the thinking for us?

I mean, we're training the AI on our own work...

One week ago, Google's AI summary told me I had to drink urine to help pass kidney stones.


I'm wary when AI is presumed to be infallible despite its hallucinations; I'm not sure that's a great way to teach kids yet. However, I went through the US public school system, so I'm well aware of how useless the average US teacher's opinion is on basically anything. I also really want to see the institution change, because right now everything after elementary is a patronizing, glorified daycare/prison for teenagers.


I think there's definitely harm, and it doesn't even require teachers to use it. Lazy students will plug anything into ChatGPT. Instead of the obviously plagiarized, unreadable garbage you used to get, you get coherent AI output that may still be garbage but is harder to detect. And students not doing the assignment doesn't help them. It's also limiting to require all work to be done in class or orally to try to prevent usage.


As someone who also went through the public school system and who was able to get into an ivy league school off of that education, maybe you should show some more respect to the people doing the work of education for shit pay.


How exactly does your Ivy League education have any relevance? Anyways, maybe 1 in 7 of my teachers were great people and I appreciate them immensely, but I had so, so many bad teachers. I had several teachers who would throw on vaguely relevant TV every class instead of teaching. Overall, most of them didn't seem interested in teaching us, nor did they seem very informed outside of the class material. I don't think they're terrible people or something, just components of a terrible and underfunded system, but I also don't trust their opinions on the future of teaching.


It's a quality signal. Not all my teachers were good and the system is underfunded, but most of them cared about the students, and I'm incredibly thankful for the education they gave me. I trust the opinions of people teaching kids every day a lot more than I trust the average Hacker News comment.


Glad you had good teachers man idk.


To me, the most unfortunate thing in this storyline so far is not that people aren’t up to speed on AI, but that students are regularly getting penalized because they are suspected of using AI.

Teachers and professors using “AI detection” tools seem to be causing the most harm at this point.

I don’t know what else kids can do aside from stream the entire process of writing an essay on Twitch.


Educational software has always been an abyss. While in college, my summer internships (early 1980s) were at an educational computer facility. They had a "lab" with a variety of available computers and software for teachers to try out. Virtually all of it was abysmal -- glorified flash cards.

It wasn't much different when my kids were in high school.


Number munchers, Oregon trail, Odell Lake, Rocky’s Boots, LOGO, Carmen San Diego…

My 1980s education technology was kick ass


Encarta Mind Maze, gizmos and gadgets, Treasure Mountain, Zoombinis, lego mindstorms


Super Munchers, Outnumbered, Operation Neptune, Wild Science Arcade, and Mario Teaches Typing

blowmp blowmp Da-da-daah! POW! :)


> Operation Neptune

Oh man, I played the heck out of that game, but somehow I keep forgetting it existed. (Unlike, say, Myst or Wing Commander.) I think I beat it, but if so, I suspect it involved a lot of trial and error on math questions I wasn't ready for.


Math blaster. Stuck in my head forever


As a parent, I put quite a bit of time into trying to find the huge number of excellent, new edutainment games that must exist since the market’s a ton bigger now and so many kids have access to tablets and such.

I was surprised to find that most of the best ones were remakes of the old ones (Oregon Trail), or weirdly empty and small in scope like the very simplest early edutainment games (Number Munchers, say), as in the case of Dragon Box Math. The legacy of that boom of ambitious, fun, and sometimes weird '90s edutainment that I was hoping to find just wasn't there. Really weird.


I think it really depends on how the institution has invested in the stuff. If they have one or more teachers who know and understand what they're doing, or some pretty good software and curriculum already set up, it's great.

But if you buy a bunch of computers or whatever and some crap software and don't execute well, it's not going to go well.

(Source: like half my family is/was a teacher or school administrator)


AI in education reminds me of that book https://en.wikipedia.org/wiki/The_Diamond_Age

Spoiler alert: it seems the author didn't consider it realistic for an AI to do a good job as a general-purpose smart assistant for a child. Yet here we are.


I thought all teenagers would be at 100% adoption of AI chat for homework and projects at this point.


Define harm.

Because without that context, I'd assume "harm" here mostly means "how I used to teach doesn't work any more." Not that that's invalid criticism, but it's very different from "harms educational outcomes."


Unfortunately, Pandora's box has already been opened; we are going to have to get better at thinking critically and judging evidence if we are to survive in the sea of LLM-generated content.


There’s no guarantee we’ll manage to cope, even socially, with every technology we develop.

I suspect this is one of them. Only expensive private education will be able to avoid regressing in quality. Public schools are already rubbing their hands with glee imagining how many teachers they can lay off or avoid hiring (there's a staffing crisis) so they can give half as much money (which will slowly inflate to the same amount) to SV app vendors instead. Zero chance the first attempt at this is gonna be any good; we're soon going to see something far more damaging to kids' educations than Covid was.


Won’t help.

Superhuman means all your efforts to think critically will be overwhelmed. How exactly do most people fact-check today? Take the Donald Trump guilty verdict today. Even without AI, who is thinking more critically: the Republicans who distrust the process, or the Democrats who welcome it?

And your friends will be turned against you by AI influencers if you dare to disagree.


This is a mild view of what’s on the horizon. I’m pretty sure liberal democracy can’t survive in the environment we’re creating. It wasn’t clear it was going to survive iteratively-improving Internet strategies in the first place, and LLMs are like giving a battleship to bad actors and a revolver to anything resembling the truth.


The only thing this shows is that the education system actively resists preparing students to succeed in a modern economy.

This is the modern equivalent of saying "keyboards hurt penmanship education."


>This is the modern equivalent of saying "keyboards hurt education, penmanship education is suffering."

Ironically, if I had listened to my teachers more, I wouldn't be peck-typing as a programmer, and my penmanship would look more like cursive than the chicken scratch I currently write.


I get you're going for the "AI is a tool" thing, but the problem with your analogy is that LLMs hallucinate and many students use them in lieu of doing reading and reasoning about the content.

LLMs are to English class as a car would be to running class. Yes, the tool exists and is useful, but it prevents you from getting the intended benefit.


Seems like educators should be adding AI to their curriculum.


How do you teach students how to reason around AI before you teach them how to reason, period? I think the problem is teaching the basic logic and philosophy behind the fundamentals of subjects. I'm talking about how to think mathematically. How to analyze critically. How to argue coherently. AI can be a boon to people who already have the capacity to reason around it, but first we have to teach undeveloped brains how to reason.


I absolutely would be. At least a grounding in understanding the pitfalls and in writing good prompts. My kids have been watching me make a browser game where the design decisions are all mine/ours, but the programming is augmented by GPT-4o. As I go, I explain why I'm using the prompts I am and how I integrate the code, wary that GPT drifts off-course the further it gets from the origins of a function. The game is not where it is purely because of GPT-4o, but it has been a key component, given I barely know the language being used. (They already do game making as part of STEM components at school.)


Depends on the class.


I don't think our existence should be dictated by economics. There are certain skills, creative and analytical, that should be practiced even when there is no economic incentive to do so. I actually think the idea of a chatbot solving your math homework or shitting out an essay is seriously regressive. Like, sure, if you instead train students to coax AI into producing "an answer", they might be able to find work. But they also missed the opportunity to practice certain cognitive skills that would enable them to develop their own ideas. Not everyone can be a specialist (or needs to be). But at the same time, they should not be handed a crutch from the get-go that discourages independence and practice.

That's not to say that there aren't certain repetitive tasks that can benefit from technology. But have you ever had a numerical problem on a calculator? Do you realize how often technology can lead to bad math? It happens a lot. If you blindly follow wolframalpha, you'll get things wrong. And you'll have absolutely no idea why.


So then 3 in 4 say AI tools do more good than harm?


What the teachers really are saying is that they're afraid the AI tools are coming for their jobs, and so the teachers will say anything at all to delay the inevitable.

Around here the school budgets are absolutely massive (in the hundreds of millions of dollars for a village), and if I were getting such free money, I would say anything to keep it flowing.


A solid group were unsure. Just reading Hacker News will show you that there are significant numbers here very enthusiastic about AI and others (probably with overlap) very concerned about where it's going. Just read any Sam Altman thread.

I'd guess that most teachers would welcome AI augmenting their teaching. They're already doing more than they were 1-2 decades ago in terms of dealing with parents and reporting on curriculum (e.g., Seesaw app), at least where I live. AI isn't going to take jobs in childcare, kindergarten and early primary schools any time soon. I also don't think it would do more than augment teaching in the rest of primary school or high school.

Teachers would jump on an AI that helped them summarise daily and weekly activities for broadcast to parents. That must be such a grind doing it now.


Counterpoint opinion: 1 out of 4 K-12 schools do more harm than good. Therefore school bad?


But is it the stupidest quarter or the smartest quarter?


Do 3/4 say it does more good than harm?


Buried the lede. The largest group of teachers are unsure.


Only 1 out of 4?


[flagged]


The article states that 6% believe they do more good than harm. I'm not sure where you came up with your headline or why you're resorting to name-calling.


If you clicked on the article you'd see that only 6% said they think AI tools do more good than harm, it wasn't a binary choice.


Which implies 3 out of 4 US teachers aren't good teachers. :P


Teachers are pretty low on the people I trust.

As an adult, I saw who became a teacher. It was not our best and brightest.

As an adult, I heard about the stereotype grading done by lazy teachers. "Oh that student is good, I'll give them an A and skim their paper... That student is a goof, they probably got a C since last time they got a C..."

As an adult, I see how terrible academic skills are, and how quickly a 12 year old could be a high school grad. College really should replace high school.

I don't trust their judgement here. I think it's most likely laziness under the guise of concern.



