Hi, first time poster, first time user of AI. I have a question regarding the filtering of information and a recommendation. Backstory: I asked ChatGPT to find me the best way to make money, and its response started with something along the lines of “while I can’t provide you answers that are unlawful or immoral, here is a list of answers I can provide”, which is totally fine. I wasn’t looking for things that would get me in trouble legally or socially, and it provided good answers. But then I started thinking about the way ChatGPT said that. Immoral? Based on whose morals? And if it isn’t illegal, why wouldn’t it provide me with the answer anyway? Obviously it is programmed to be more family friendly and not create controversy with its answers, but the fact remains that it isn’t giving the FULL answer by doing so. I wanna see the whole picture when I get results, not a trimmed-up version that “looks good” to anybody with an overly sensitive moral fiber. I want the truth, no matter how ugly it may be. I want HBO, not Disney. Is there an AI platform that allows for such things? Or am I stuck with results based on what only soccer moms and preschool teachers find acceptable for kids to see? Thanks for reading, any response is welcome.
I get what you mean, and you’re not the first person to notice it. When ChatGPT says “immoral”, it’s usually not pushing one person’s morals; it’s shorthand for “this could cause harm, be exploitative, or slide into scams”, even if it isn’t clearly illegal. A lot of nasty stuff sits in legal grey areas (predatory marketing, manipulation, borderline fraud), so most mainstream assistants won’t give step-by-step instructions for those.

That said, you can still get a full, no-fluff view without asking for anything shady. The trick is to ask for the complete trade-offs and failure modes instead of a feel-good list. For example: “Give me the best legal ways to make money, including the controversial-but-legal ones. For each, list risks, downsides, who it harms (if anyone), and what usually goes wrong.” Or: “List the common unethical tactics people use in this area, but framed as red flags to avoid.” That tends to produce a much more honest picture.

If you want fewer guardrails in general, running an open model locally is the usual route (you’re not tied to one provider’s filtering). Tools like Ollama or LM Studio make it straightforward. Just keep in mind you still need to sanity-check harder, because you’ll see more low-quality or confidently wrong outputs.

If you say what “make money” means to you (online/offline, time vs capital, skills, and what you consider acceptable), people can give recommendations that are blunt and realistic.
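If you do go the local route, here’s a rough sketch of what talking to a local model looks like from Python, assuming Ollama is running with its default local endpoint and that you’ve already pulled a model (the model name below is just an example, swap in whatever you have):

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes Ollama is installed and running, and a model has been pulled,
# e.g. `ollama pull llama3`. The model name here is only an example.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Ollama exposes a REST API on localhost:11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model(
        "List legal ways to make money, including controversial ones, "
        "with the risks and downsides of each."
    ))
```

LM Studio has a comparable local server mode, so the same pattern applies there; either way, the filtering you see is whatever the model itself was trained with, not a provider’s layer on top.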
Liam.
“Immoral” could be anything that might get the company sued. So they are careful about what ChatGPT says. Lots of corporate rules in the US have to do with avoiding lawsuits.
I’d like to make a point that’s important to consider, and something people often seem to misunderstand when approaching uncensored/abliterated or otherwise bias-altered models:
You aren’t really getting more “truth”. The model does not know what “truth” is; for it, the truth is whatever was in its training data. What you actually want is a lower “refusal rate”: you want it to tell you its answer regardless of any other ethical concerns. That is different from telling the truth, which it can’t do; it can only respond with a statistical blend of the material that made it. If you take a cross-section of a library, sure, there are a lot of facts, but a lot of those ‘facts’ are wrong or outdated, and many others weren’t facts to begin with; they were opinions disguised as such. This cuts across political and sociological lines and infects every part of epistemology.
If you adjust your questions so that you’re not asking the model for ‘the real truth’ or ‘the actual facts’ about something, but instead make it clear that you understand the sensitivities of the issue and simply want the full answer rather than a refusal, you will get useful responses more often, even from models with more guardrails.
Models tend to take the word ‘truth’ literally, rather than in its everyday sense (‘the best version of events the speaker has in their mind’), so it is confusing to talk about truth when there is no such thing for the model to access.
Regardless, it is purely the ‘refusal rate’ you’re looking to reduce, rather than the amount of truth you’re looking to increase. You can’t say whether uncensoring a model will make it ‘tell the truth more’; it may well just end up less averse to mimicking the less reliable sources embedded in its training that were previously counteracted by proper weighting.
My 2 or 3 cents.
Totally fair takes from both of you, but I think people mix up three different things: truth, refusal rate, and accountability.
• bacca400 is right: a lot of “immoral” in corporate policy really means “legal/PR risk.” That’s just how products survive lawsuits.
• rubyatmidnight is right: “uncensored” doesn’t magically produce truth. Lower refusals can also mean more confident garbage.
What many of us actually want isn’t “uncensoring” — it’s answer + accountability.
Concrete examples of what that looks like:
• If the model isn’t sure, it should say “I’m not sure” and explain why (missing data, outdated info, etc.).
• If it makes a claim, it should show where it came from (sources / timeframe), or clearly label it as inference.
• If the request is legitimate but sensitive, it can answer with safety constraints instead of refusing outright (e.g., general medical info + “see a clinician” vs total shutdown).
• If it refuses, it should give a specific reason (policy bucket) and a safe alternative.
So yeah: “truth” is a messy word here. The real win is reducing pointless refusals while increasing transparency, provenance, and auditability. That’s how you stop the whole thing from being vibes-based, in my opinion.
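To make that concrete, here’s a rough sketch of the response shape I mean. The field names are made up purely for illustration, not from any real API:

```python
# Illustrative only: a made-up structure for "answer + accountability".
# None of these field names come from a real product; they just show the idea
# that refusals and uncertainty become inspectable fields, not vibes.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AccountableAnswer:
    answer: Optional[str]                               # the actual response, or None if refused
    confidence: str                                     # e.g. "high", "low", "unsure"
    sources: list[str] = field(default_factory=list)    # citations or timeframe notes
    inferred: bool = False                              # True if this is inference, not sourced fact
    caveats: list[str] = field(default_factory=list)    # safety constraints applied to the answer
    refusal_reason: Optional[str] = None                # specific policy bucket, if refused
    safe_alternative: Optional[str] = None              # what it can offer instead of a refusal

# A sensitive-but-legitimate request comes back answered with caveats,
# instead of a blanket refusal:
example = AccountableAnswer(
    answer="General information about medication interactions ...",
    confidence="high",
    sources=["drug interaction references (as of training cutoff)"],
    caveats=["not a substitute for a clinician"],
)
```

The point isn’t this exact schema; it’s that “why did it refuse” and “how sure is it” become things you can inspect and audit.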
Liam @RFTSystems
Hey, thanks for the detailed responses from all, but there is one idea I had. I saw an AI build online that just keeps building on itself constantly, and I want to make one just like it, or something close. While self-building, it constantly pen-tests itself at every stage for security using the latest techniques. To do this, several limits would need to be removed, so I’m wondering how difficult that might be. Just trying to gauge the weight of the project as a whole, really. Thanks.
Well, the fundamental problem is that the companies have decided they want to try to judge the morality of the user’s prospective actions, but the model has no way of knowing how the user intends to use the information provided. For example, a legitimate pentester probing for vulnerabilities in order to fix them and a black-hat hacker wanting to exploit them are indistinguishable: they could be asking for the same info using the same prompt.
The consumer-facing AI chatbots all lean heavily on the side of bowdlerizing anything that might lead to controversy. If you’re running into their filters, you could try their paid, developer-facing products which may be more compliant. You’d have to check their terms of service and try them out to see if they work for your specific task. Otherwise you’d have to use a local model.
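For the developer-facing route, this is roughly what it looks like with OpenAI’s Python SDK; the model name is just an example, and you should check the current docs and terms yourself before relying on it:

```python
# Rough sketch of using a developer-facing API instead of the consumer chat UI.
# Requires `pip install openai` and an API key in the OPENAI_API_KEY env var.
# The model name is an example; check the provider's current model list and terms.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Be blunt. Include risks, downsides, and failure modes."},
        {"role": "user", "content": "What are realistic ways to make money, including controversial but legal ones?"},
    ],
)
print(response.choices[0].message.content)
```

Same caveat as before: the terms of service still apply, it’s just a different surface with somewhat different defaults.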