In What Sense are AI Tools Useful?


July 31, 2025

I think it's fair to say the following rule is useful for guiding how we use AI: when it comes to known knowledge and established tool sets, AI is very good; it can even correct us if we're saying something obviously wrong.

But for unknown knowledge, the kinds of questions no one really has answers to yet, it acts more like transparent glass: it just polishes whatever it's been given, often by being agreeable. So if a human feeds it nonsense, it produces good-looking nonsense; if someone provides something insightful, it returns a neater-looking insight. This is the usage pattern I have observed in 2025 (I will update this as time and technology evolve). So far, it seems that artificial intelligence tools don't replace theoretical physicists, but serve as springboards for them to reach a higher level of understanding more efficiently.

Here are two guiding rules I think are useful:

Use language models only for the following two purposes:

1) Updating knowledge and skills – For example, conducting literature reviews of papers, or refreshing concepts you’ve already learned but may need to revisit. But you must be able to cross-check every step of its reasoning.

2) Encyclopedic and manual-labor help while creating – Use it for routine calculations after you've already learned how to do them yourself, or to get unstuck on specific calculations or concepts you've struggled with for a reasonable amount of time. Again, you must be able to cross-check its reasoning (there should be no black boxes in your understanding).

I think there are four moral questions that motivate these two rules, or could motivate more rules, depending on context:
1) Am I making sure that scientific integrity is kept intact? (This applies regardless of whether AI is used, and corresponds to questions such as: are the observations I made accurate, is the reasoning rigorous, have I made clear the assumptions I am making and the limitations that exist, have I credited other people for the ideas on which I am building, and am I held accountable for my work?)
2) Am I using only resources that are widely accessible to my peers as well?
3) Are my voice and thought process conveyed to the audience? Science is as much about the results as it is about a glimpse into how the scientist thought, reasoned, and approached a problem. It is a literary form in that way, so we want to make sure the audience is reading a human thinker; nobody wants to read an AI persona.
4) Am I unfairly disadvantaging myself by not using a tool that is available to everyone? That is, am I insisting on going on horseback when everyone has access to cars and flights?

The first three are outward-directed moral questions, while the last one is a self-preservation moral question.

I think that if the answers to the first three questions are all yes, then one is already on firm moral ground, socially, regardless of AI use. A yes or no to the last question does not, in that case, affect one's moral standing in academic writing. If the answer to it is no, one simply becomes slower and less efficient than necessary (like choosing to ride a horse); if it is yes, while the answers to the first three remain yes, then one stays on firm moral ground without losing efficiency.

I leave you with two thought experiments:

1) Suppose three mathematicians, A, B, and C, independently come up with clever proofs of a very difficult math problem, and the proofs work. Later, it is found that A used a very advanced, superspecialized AI system to assist with the proof, one available only to a very narrow set of people because of its expense. Mathematician B also produced a proof, but it turns out he had many brainstorming sessions with ChatGPT, a tool assumed to be uniformly available to the masses, and directed it to carry out calculations once he had decided which calculations to do. Mathematician C worked out everything with pencil and paper. In this scenario, it seems to me that B and C are both on firm moral ground, even though B did not disclose his use, since he used something that C could have used but chose not to. Only A has weak moral grounding. I think B stands on firm moral ground because of the intent of the writing: if this were an exam or an olympiad, then obviously only C is acting morally (in that case, B's choosing to use ChatGPT would be like an athlete choosing a car to run a 500-meter race; the car is available to everyone, yes, but using it is not moral here). But if the goal is simply to contribute knowledge, then B and C both seem fine (in that case, C's choosing not to use it is like a person choosing to walk to the grocery store even when he has a car to carry the groceries home comfortably).

2) Suppose we have two scientists, A and B, who, before AI, were such that A was analytically strong and disciplined but less creative, whereas B was creative but less analytical and less disciplined. Then the B + AI combination can become much more analytical and creative than the A + AI combination; it is as if the transhuman combination amplifies certain skills while leaving others untouched. In this case, I don't think we can say B has an unfair advantage, for the same reason we could not say A had an unfair advantage before AI.


