
While doing math, computer science, or programming, I benefit greatly from using GPT-4 (ChatGPT-4), and I do not want some governmental agency to remove access to it. I have not had the privilege of attending top schools or learning from top teachers and professors, but GPT-4 gives me excellent answers to my questions, both very specific and very general. I consider access to GPT-4 (or any other LLM, if one emerges) a necessary condition for my professional and personal growth and for access to a good job, education, and material conditions. Hence I consider access to AI, GPT, or large language models (LLMs) to be a matter of human rights. Moreover, AI could automate physical jobs and thereby reduce exploitation. Governments usually claim that they must press people into physical work because someone has to do it, and that there are insufficient funds for education, health care, etc.; but AI can boost those revenues too. So access to AI (both individual and collective) should not only foster the growth of the individual, but also preserve and guarantee his or her dignity as a person, and provide the practical resources for the implementation of other human rights, including social human rights.

I acknowledge that there can be concerns about privacy and other threats that AI may pose, but we should also acknowledge that there should be a human right to access AI, and we should consider those privacy and threat factors not as isolated issues, but in balance with that right.

What is the literature on this? Or is this question perhaps the first suggestion of a human right to access AI?

Information added: Some commentators have stated that my question expresses policy preferences. With all due respect, the statements about exploitation, work, and the accumulation of resources are not my policy preferences but aspects of the future of work considered in the NBER working paper https://www.nber.org/papers/w30172, titled "Preparing for the (Non-Existent?) Future of Work".

Information added: Some simple experiments with ChatGPT Plus/GPT-4 show that this kind of AI could be a sine qua non resource for jobs and education, something closer to a social human right than mere access to a telephone or a library.

Information added: There is in fact a concept of the digital divide, and there are programs that try to close this divide/gap; such programs involve social initiatives providing access to computers, high-bandwidth internet, and the relevant computer-literacy training. There have already been calls to extend human rights so that they can reduce the digital divide, e.g. https://www.opensocietyfoundations.org/publications/digital-divide-and-human-rights So access to AI can be part of this movement. Of course, AI can be and will be costlier, and an AI-enhanced society could multiply the harmful consequences of the digital divide, elevating the respective rights to the list of the most important ones.

  • I’m voting to close this question because it belongs on politics.stackexchange.com. Commented Apr 16, 2023 at 23:25
  • What about the rights of a person who created the AI? Commented Apr 17, 2023 at 14:17
  • What about the rights of those whose data were used for training? What about open-source science articles whose researchers receive very modest wages? What about the situation when AI gains skills through self-training? Lots of questions, but some in the community choose to close the question.
    – TomR
    Commented Apr 17, 2023 at 15:02
  • @TomR The rights of those whose data is used are pretty straightforward; the difficulty comes in asserting and/or enforcing those rights. If you would like a more comprehensive discussion, you should make a separate question focusing specifically on that.
    – Michael
    Commented Apr 17, 2023 at 20:24
  • Can you verify that the AI researchers have low salaries? Even in academia I expect many are well compensated. And where are you expecting to assign the rights when an AI self-trains?
    – doneal24
    Commented Apr 17, 2023 at 21:06

3 Answers

3

There's no such right, but a government could create such a right just as it's possible to guarantee a right to telephone service or to public libraries. Some governments do this, some don't. Rights of this nature are not so fundamental as to be found in documents such as the Universal Declaration of Human Rights.

  • Comparison: some countries have declared a right to Internet access (and not in the silly USA/user6726 sense of "we won't punish you for accessing the Internet", but rather "we will try to give you an Internet connection at your house"). Commented Apr 17, 2023 at 12:22
  • @user253751, your point isn't totally clear to me, but whenever one person's "right" is the work product of another you implement a servant/master relationship. Commented Apr 18, 2023 at 0:59
  • @MichaelHall all employment is a servant/master relationship Commented Apr 19, 2023 at 3:40
  • @user253751, nope, it isn't. You can quit your job. Commented Apr 19, 2023 at 3:57
  • @MichaelHall you can also quit your job as an internet installer, and if there are very few internet installers, the government may declare this is no longer a right (the population will find it hard to disagree if nobody is willing to do the job), or pay more for internet installers Commented Apr 19, 2023 at 4:56
2

It depends on what country you are in, naturally. In the US, you have a right to access an AI, at least as long as you correctly understand what that right is. It is not illegal to access an AI, though there is no guarantee that you will be provided access, free or otherwise. If you can find an AI and find a way to access it, you will not be punished in any manner.

The government can by law restrict your freedom to exercise rights, even clearly articulated, constitutionally guaranteed rights such as the one guaranteed by the 2nd Amendment. Rights in the modern legal view are not absolute; they can be subordinated to other governmental concerns. You have the right to dispose of your property as you see fit, with some exceptions, such as setting your house on fire without a burn permit (depending on local ordinances).

Restrictions on your exercise of your rights, especially those that are constitutionally protected, require a compelling government interest. Your right to access an AI follows from the First Amendment, as does your right to read books, watch movies, and buy a computer on which to read and write books and watch or produce movies.

Note, however, that this depends on a particular Natural Law understanding of rights, as facts intrinsic to being a person, and not as a boon from the sovereign, which is the more traditional view of the notion of "rights". Both perspectives exist in legal systems, and the choice between these viewpoints is purely political.

  • So, all of Law StackExchange should be moved to Politics.
    – TomR
    Commented Apr 16, 2023 at 23:50
  • 4
    No, just the questions that focus on what the law should be. Alternatively, Philosophy SE. The political justification of laws belongs there; Law SE does however consider the internal logic of a legal system, such as US law, looking for resolution of conflicts between the Constitution and particular legislative actions.
    – user6726
    Commented Apr 17, 2023 at 0:05
2

In Germany, the freedom of speech is matched by a freedom to listen: the right to inform oneself "from publicly available sources" (Art. 5 GG).

These freedoms are not unlimited. Just as speech can be limited by a prohibition of slander, there is no right to play heavy metal at maximum volume when normal people want to sleep. Yet no limitation of these freedoms may touch the "core" of a freedom. Notably, the words of the freedom of speech imply that the speaker would be a human, while the freedom to listen does not imply that the speaker must be human. But ChatGPT is much too young to have created established legal precedent.

On the other hand, there is talk in Europe of banning or more likely regulating AI. Such regulations might well be constitutional if they are seen as necessary and proportionate in the protection of other constitutional rights. I can see two issues here:

  • The owner of the AI, if not based in Europe, may be unwilling to comply with regulations because doing so would be incompatible with the business model. AI may have started as a research project or a proof of concept, but sooner or later owners will have to monetize it to afford its operation. Their model could be paywalled access, targeted advertising, or the generation and sale of user profiles; the latter two might become prohibited.
  • The owner of the AI may be unable to comply with regulations because they require things the AI is unable to do. For instance, there are suggestions that any decision by an AI which affects humans must be documented in a way that is comprehensible to human auditors. The owner of the AI might be unable to keep the AI from prohibited actions, and the system might be incomprehensible to humans.
