You're going to have a fairly limited range where success (of some level) is possible, but isn't guaranteed.
For 1d6, it's obviously impossible to roll doubles. 7d6 guarantees at least one double by the pigeonhole principle. So only from 2d6 to 6d6 do you have any uncertainty at all. Because my conditional probability skills have atrophied, I just brute-forced the chance of success from 2d6 to 6d6 with Python:
>>> from collections import Counter; from itertools import product
>>> def prob_success(n):
...     c = Counter(len(x) == len(set(x)) for x in product(range(1, 7), repeat=n))
...     return c[False] / sum(c.values())
...
>>> prob_success(2)
0.16666666666666666
>>> prob_success(3)
0.4444444444444444
>>> prob_success(4)
0.7222222222222222
>>> prob_success(5)
0.9074074074074074
>>> prob_success(6)
0.9845679012345679
You'll note the odds of success rise quickly. The birthday problem (or "paradox") means your odds of a repeat increase faster than a mere one-sixth per additional die, so even within the range where failure is possible, three of the five pool sizes are heavily weighted towards success and only one is heavily weighted towards failure. You'll have basically no ability to provide granular, predictably increasing levels of difficulty.
As it happens, the generalized birthday problem (for a "year" with six "days", looking for two or more "people" sharing a birthday) describes your exact situation. If you want a general solution, that's the mathematical tool to use: it solves for arbitrary numbers of dice (and adapts to other die sizes) with arbitrary degrees of matching (doubles, triples, etc.).
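For the doubles case, the birthday-problem calculation is simple enough to write out directly. Here's a sketch (the function name is my own) that computes the chance of at least one repeat as 1 minus the product of "new die misses every value rolled so far" factors; it matches the brute-forced numbers above:

```python
def prob_double(n, sides=6):
    # P(all n dice distinct): the i-th die must miss the i-1 values
    # already showing, so multiply (sides - i) / sides for each die.
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (sides - i) / sides
    return 1 - p_distinct
```

For example, `prob_double(2)` gives 1/6 and `prob_double(7)` gives exactly 1.0, since the seventh factor is zero by the pigeonhole principle.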
For triples, you have the same obvious end points (impossible below three dice, guaranteed to get a triple with 13 or more dice). Because I don't trust myself to apply the birthday problem, a generalized Python function that solves (incredibly slowly for large inputs) for arbitrary numbers of repetitions on arbitrary numbers of dice would be:
from collections import Counter
from itertools import product

def prob_success(n, rep=2):
    c = Counter(max(x.values()) >= rep for x in map(Counter, product(range(1, 7), repeat=n)))
    return c[True] / sum(c.values())
# Or the much faster probabilistic one, which gets closely matching results
# for all the cases the exhaustive one can compute in reasonable time:
import random

def prob_success_p(n, rep=2):
    d6_opts = tuple(range(1, 7))
    c = Counter(max(Counter(random.choices(d6_opts, k=n)).values()) >= rep for _ in range(100_000))
    return c[True] / sum(c.values())
From 3 to 12 dice (I stopped the exhaustive search at 10 because it's slow), the probabilities go:
3d6: 0.0278
4d6: 0.0972
5d6: 0.2130
6d6: 0.3673
7d6: 0.5409
8d6: 0.7074
9d6: 0.8425
10d6: 0.9325
11d6: 0.9788 (from probabilistic solver)
12d6: 0.9966 (from probabilistic solver)
which shows a less extreme version of the doubles pattern: the probability of a triple starts low (2.78% with three dice; basically, the odds that the other two dice both come up the same as whatever the first die rolled), but it grows quickly, and adding just one die makes a big change in the odds of a "super success".
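If the exhaustive search is too slow for larger pools, the same answer can be computed exactly without enumerating every roll, by counting how many positions each face can occupy with binomial coefficients. This is a sketch of my own formulation (not a standard library routine); it reproduces the exhaustive results above in a fraction of the time:

```python
from math import comb

def prob_success_exact(n, rep=2, sides=6):
    # Probability some face appears at least `rep` times in n rolls.
    # Complement: count the ordered sequences in which every face
    # appears at most rep - 1 times.
    # ways[k] = number of ordered arrangements filling exactly k of the
    # n positions using only the faces considered so far.
    ways = [0] * (n + 1)
    ways[0] = 1
    for _ in range(sides):
        new = [0] * (n + 1)
        for used, w in enumerate(ways):
            if w == 0:
                continue
            for cnt in range(min(rep - 1, n - used) + 1):
                # choose which of the remaining positions this face fills
                new[used + cnt] += w * comb(n - used, cnt)
        ways = new
    return 1 - ways[n] / sides ** n
```

Sanity checks line up with the numbers above: `prob_success_exact(2)` gives 1/6, `prob_success_exact(3, rep=3)` gives 1/36 (2.78%), and `prob_success_exact(13, rep=3)` gives exactly 1.0, matching the pigeonhole endpoints.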