
This is the third in a series of posts that we’re using to show (broadly, directionally) some of the potential use cases for generative AI in the Stack Overflow and Stack Exchange environment. 

Previous posts:

  1. An example of a generative AI tool
  2. Input on a question title drafting system

We’re releasing an experiment based upon Yaakov’s title selector that I posted about previously. We started collecting feedback on Meta Stack Overflow about a week ago but now that the experiment is live on Stack Overflow, we want feedback on the use case for Stack Exchange sites. We expect the experiment to run for about 14 days or until we reach about 9000 questions.

What is a Title-Drafting Assistant?

We know that folks often use titles to make decisions about whether or not to dig into a particular question. Bad titles make that much harder, obviously. So in order to make that process easier (and improve the quality of questions and answers on the platform), we’re experimenting with providing some AI-generated title suggestions. 

Why do this? We see three main benefits here. First, question askers spend less time crafting the perfect title for their questions, and instead can focus on the content of the question. Second, question reviewers are able to better understand the content of the question, making it easier to suggest edits or improve the post. Finally, end users of Stack Overflow can more easily understand if the question is relevant to their needs.

Here is a video walkthrough of the experimental conditions.

What are we looking to understand with this experiment/how are we measuring success here?

We’re principally interested in assessing five core questions:

  1. Do users’ questions perform better when they’ve received title-drafting assistance?

  2. Are titles written with assistance edited more or less often by community members?

  3. Do readers and answerers interact differently with questions that received title-drafting assistance? For example, via reviews or comments.

  4. Do better titles lead to a reduction in users – particularly new users – abandoning their questions? Does it increase the rate at which new users come back to the site in the future?

  5. How many users will accept the titles that are recommended to them?

What other use cases can you imagine for a Stack Overflow title drafting assistant? Given how the feature is going to look and how we believe that people will use it, are there reasons you would be skeptical of the results of the experiment?

Leave your response as an answer below, and I’ll be checking in on them regularly.

18
  • 47
    The experiment has hardly even begun and we're already starting the process of adding it network wide?
    – Kevin B
    Commented May 18, 2023 at 16:14
  • 23
    The walkthrough video is of poor quality. I can't really see what is going on, and I can barely hear what is being said. Commented May 18, 2023 at 17:05
  • 12
    Given that SO has way more titles as training data than other sites, how well is this expected to work on smaller sites, especially ones that are not technology focused (assuming this would be deployed network-wide)? Will a separate model be trained for each site?
    – cocomac
    Commented May 18, 2023 at 18:19
  • 104
    "Bad titles make that much harder, obviously." does it though? More often than not, a bad title is an indication of a bad question. Now, we're disguising these questions as good questions, only for us to click through to them and find garbage.
    – Kevin B
    Commented May 18, 2023 at 18:37
  • 4
    @KevinB As I've stated before, but want to state again here, I'm not so sure about that. I'm more convinced that we'll see garbage-in, garbage-out happening. Ex. poorly narrowed-down problem in question body -> bad (vague) title.
    – starball
    Commented May 18, 2023 at 21:42
  • 4
    @starball that's not what i've seen thus far.
    – Kevin B
    Commented May 18, 2023 at 21:44
  • 34
    Will users be able to tell (via the system telling them, not just guessing) that a title was chosen from this tool and not something they've written. That could clear up any potential confusion if the title ends up not making sense in relation to the question. Commented May 19, 2023 at 9:56
  • 12
    Is staff going to start using this assistant as well for their announcements? It's kinda funny that the same people who produce posts like this one are suddenly so concerned about having clear and meaningful titles.
    – Dan Mašek
    Commented May 20, 2023 at 13:07
  • 14
    a blunt comment: I'm surprised at the low production quality of the videos... I'd have expected a company this big to make something nicer. the speaker doesn't sound very engaged, you can hear them getting sound notifications in the background, and you're basically just showing the video script and your internal design tools... who is the video for? If it's just for the user community, why make a video when you know we have a whole thing about conveying things with priority for text above images? If it's for your paying customers, ...
    – starball
    Commented May 22, 2023 at 3:55
  • 2
    "Leave your response as an answer below, and I’ll be checking in on them regularly" - but you are not. I counted the amount of responses (comments) from you, or any staff, to the answers. The count is exactly 0. I'm sad and disappointed. Expected more. Commented May 22, 2023 at 13:54
  • 1
@ShadowTheSpringWizard Why do you read "checking in on" as synonymous with "respond to"? I interpreted it more as "read and consider them with folks internally".
    – zcoop98
    Commented May 22, 2023 at 15:51
  • 8
    Maybe add a sixth core question to assess: "Does the community want this?"
    – MMM
    Commented May 25, 2023 at 15:10
  • 2
    "First, question askers spend less time crafting the perfect title for their questions, and instead can focus on the content of the question." -- this is backwards. As you say yourself, the title is arguably the most important part of the question. Generating the post from a good title and, say, an error message (or maybe even an interactive chat-bot thing) seems far more fruitful.
    – Raphael
    Commented May 26, 2023 at 11:30
  • 2
    I personally don't see the reason behind all the hate against AI here, and I believe we should make use of AI tools as much as we can, as long as it is an improvement.
    – Emre Bener
    Commented May 29, 2023 at 12:38
  • 5
@Mephisto It's not really hate, but rather strong skepticism. The future will tell. However, in this case it's mostly "even AI won't be able to improve anything here, because there is a fundamental problem with user-generated content that needs to be addressed first". You may be able to use AI to address that problem, but the company isn't doing that. They seem to believe that one can generate good titles out of bad content. I doubt that. AI isn't magic. A good use of AI would, in my eyes, be to draw people's attention to bad content earlier and only later help with good titles for good content. Commented May 30, 2023 at 9:44

8 Answers

106

Given how the feature is going to look and how we believe that people will use it, are there reasons you would be skeptical of the results of the experiment?

Frankly, Philippe, I am extremely skeptical of the experiment for a whole list of reasons. For the most part, they've been outlined well enough in feedback other users have already provided (here, here, here, here, here, and here), so I'll try to be brief. First of all, the stated benefits of the feature look rather "empty" and unsubstantiated:

First, question askers spend less time crafting the perfect title for their questions, and instead can focus on the content of the question.

The problem here is that users have never, in general, spent any significant amount of time "crafting the perfect title". Has the company conducted any research as to whether users are slowed down by having to come up with a title, or is this just a vacuously true statement? As damning evidence to the contrary, let's take a quick look at a slice of the newest question titles on Stack Overflow:

screenshot of the newest tab on Stack Overflow with a list of questions with no-effort, incomprehensible, or incorrect titles

In case this isn't obvious from the screenshot, all of the titles shown (you can repeat this experiment a thousand times, and the output will not change) are borderline incomprehensible, non-descriptive, or outright gibberish. And all of them show a distinct lack of care, thought, and research put into them.

Users who can carefully craft question titles are few and far between, and they are generally capable of making one without any assistance from an LLM tool.


Second, question reviewers are able to better understand the content of the question, making it easier to suggest edits or improve the post.

The concern here is that this claim, too, is unsubstantiated. Surely, the titles may start to look better and show higher proficiency in English, but can you show that the usage of an LLM tool will solve the fundamental issues leading to bad titles: lack of experience, general incompetence, severe gaps in knowledge, inability to identify problems, etc.?

Unless the tool is able to address all those, what you will get in the end is plausible-looking misleading nonsense that will certainly not make it easier to understand questions or suggest edits to posts. In fact, as many others have already mentioned, it might lead to more curators skipping over questions because their titles look superficially fine.

Finally, end users of Stack Overflow can more easily understand if the question is relevant to their needs.

The same concern as with the second point applies here: can the company substantiate the claim, at least in theory? A better title is no more useful for understanding whether a question is relevant than a worse one if it is still nonsensical or misleading.

In fact, this is one of the reasons there is a temporary ban on AI-generated content on Stack Overflow (and many other sites of the network): the content tends to look good while being either incorrect, misleading, nonsensical, or outright harmful.


As for concerns regarding the "five core questions":

Do users’ questions perform better when they’ve received title-drafting assistance?

If you will only be looking at whether title-assisted questions get more upvotes and fewer downvotes, you risk misinterpreting improved reception due to the post looking better as the post being of higher quality. You need to investigate a whole complex of factors, including, most importantly, how often the posts are closed as needing improvement.

Are titles written with assistance edited more or less often by community members?

Here you risk mistaking a reduction in edits due to the title looking superficially better for the post requiring less improvement. On the one hand, as also pointed out by others, editors might start skipping posts with well-written titles. On the other hand, if the posts start to get more edits, you need to look qualitatively at what edits are made, as an uptick in edits can mean the generated titles are even worse than the manual ones just as much as it can mean they are easier to understand and thus edit.

Do readers and answerers interact differently with questions that received title-drafting assistance? For example, via reviews or comments.

Again, this depends on how you are going to approach the analysis. If it is going to be the usual "there was an X increase in the number of Y", you risk severely misinterpreting an uptick as an improvement (when it might just as well indicate that the tool makes content less understandable, requiring more back-and-forth).

A downtick in a purely quantitative analysis can equally mean that the content requires less interaction to parse or that the title makes it harder to engage with the post.

Do better titles lead to a reduction in users – particularly new users – abandoning their questions? Does it increase the rate at which new users come back to the site in the future?

The question abandonment criterion is highly prone to misinterpretation too: more users abandoning drafts might be due to the UX accompanying the change being subpar and / or confusing. A reduction in abandoned drafts might mean that the velocity of posting a question is simply greater due to less effort required from the user.

It does not take into account that greater velocity might actually be a bad thing precisely because the users do not need to spend effort to formulate their title. It also does not consider that a greater number of abandoned drafts can mean that users come up with a solution or realise their mistake while trying to come up with a title on their own.

How many users will accept the titles that are recommended to them?

Finally, this criterion is unlikely to provide any useful information at all. Question askers, as a general rule, come from a position of lack of knowledge. More often than not they are incapable of determining where the problem even is (and the "quality" of existing titles attests to that), let alone judge whether the generated title is actually better and not better looking.

It is highly likely that users, and especially new users, will choose a title provided to them solely on the basis of it being well-formed. I also posit that the ones who choose to write their own title will likely be the ones who already know what they are doing (and thus find the suggested titles insufficient / incorrect / missing the point).

Unless you can account for that, I highly doubt this will be a useful data point.


To summarize the section about the core questions (the benefit ones can be reduced to a single concern: are those just empty statements, or can you actually show that usage of the tool can, in principle, provide such benefits), please, when you eventually mark the experiment as a success, do not just say:

  • "the performance criterion was satisfied as there posts started to get more upvotes and less downvotes";
  • "the editing criterion was satisfied due to posts receiving less edits";
  • "the interaction criterion was satisfied because there have been more comments / answers on those posts";
  • "the abandonment criterion was satisfied as more users have seen their posts to completion";
  • "the acceptance criterion was satisfied due to a lot of users accepting suggested titles".

Those are all very flawed metrics to use, and I worry you'll just miss the forest for the trees with this initiative.

8
  • 23
    "Finally, this criterion is unlikely to provide any useful information at all. Question askers, as a general rule, come from a position of lack of knowledge" just want to link this point back to the beginning - these are going to be the same users who barely cared for their title anyway. Will they accept one? Likely - saves them typing. Also worth noting that the UI can very well confuse users into selecting a title. Often enough with the old ASK page we have had users adding completely irrelevant tags. The page said you can have up to 5 tags, which some users took as "must have 5".
    – VLAZ
    Commented May 18, 2023 at 18:16
  • 13
    @VLAZ indeed, this is exactly what is worrying me: of course they will accept a suggested title, why wouldn't they? No need to come up with one, fiddle with filters not allowing obvious trash, to type it out, etc. And yeah, I even mentioned the UX being more confusing potentially skewing the results. Commented May 18, 2023 at 18:29
  • 2
    Based on my experience in the review queues, I think the biggest problem for people who struggle to meet community standards is a language barrier. A significant portion of posts from new contributors require a great deal of focus just to comprehend what they're asking in the full question because, I presume, they just don't know how to write in English very well (all due respect to non-native speakers). How will an AI cope with that, and what effect will it have/not have on how these users interact with the site & community?
    – ABabin
    Commented May 20, 2023 at 22:04
  • @ABabin did you mean to post it under the question? :) I agree this is yet another concern and a question SE needs to ask when evaluating the results. Btw, do post it as an answer here if you want - that's definitely a valid concern about the tool. Commented May 20, 2023 at 22:15
  • 1
    @Oleg Well, I think it's relevant to your point about "plausible-looking misleading nonsense", which is where my main concerns about this (and generative AI in general) lie. Forget if they care or not - do they even understand? Does it matter? I do not know how LLMs handle bad or incomprehensible grammar or how SE feels about the constant stream of it so I just wanted to prompt someone else who is to elaborate further.
    – ABabin
    Commented May 21, 2023 at 2:58
  • 2
    @ABabin all good then, I was just curious whether you wanted to address that to the company or add additional point to what I described. More on point - on the one hand, use of LLMs might, indeed, help alleviate issues with ESL users not being able to properly formulate their titles due to not being proficient enough in English as such models are well-equipped for "understanding" linguistic patterns. On the other hand, it is also unlikely that better-formulated titles will do them much good as LLMs can only regurgitate what they were fed. Commented May 21, 2023 at 4:51
  • 3
    It is highly likely that users, and especially new users, will choose one provided to them solely on the basis of it being well-formed. <--- We see this already with tag suggestions. On MathOverflow we get new/unregistered users posing off-topic elementary questions with irrelevant tags they don't even know the intended meaning of, otherwise they wouldn't have taken what the system recommended based on autocompletion. For example someone has a question on basic trigonometry, and thinks [trignometric sums], suggested by the system, fits. Commented May 22, 2023 at 8:36
  • 1
    @DavidRoberts oh, that's actually a great point - we can already observe this behavior with tag suggestions indeed. Same on Stack Overflow - the system pops up a list of suggested tags based on user input, and they proceed to slap anything that looks even remotely related no matter whether it actually is... Commented May 22, 2023 at 12:31
49

Your goals / benefits seem strange and/or out of order

First, question askers spend less time crafting the perfect title for their questions

Do you really think that

  1. there is such thing as a perfect title? (There often isn't. For example, there are often tradeoffs between technical correctness in terminology-usage and searchability for people with the knowledge-gap that motivates having the question)
  2. all questions askers care about the quality of their titles? (from experience, a large portion of them don't care, and/or haven't developed the skill)
  3. all question askers spend meaningful time on their titles? (same as above).

and instead can focus on the content of the question

hm. The title is an incredibly important part of the question. If getting it done well means spending time and thought on it, spending less time on titles seems like a slightly perverse target objective.

I don't think writing a good title is particularly hard. Asking a good question is what's difficult (at least when it comes to programming questions): particularly narrowing down potential causes of issues to arrive at a question with a good scope. See also my gem/geode analogy in my MSO post.

Second, question reviewers are able to better understand the content of the question, making it easier to suggest edits or improve the post.

I don't see the connection between understanding the question based on a title and it becoming easier to suggest edits or improve the post.

Most of the edits I make are to fix cosmetic problems (spelling, grammar, punctuation, formatting, denoising, etc.). Titles have little to no effect on how easy it is for me to make those edits. If you really want to save my time with respect to that, give me better tools, or better yet, hack at the root of the problem and give askers tools that point out / fix those problems.

If a question already has clarity problems (primarily missing information), a better title will not magically make it any less incomplete. When I fix those problems, it's from info solicited through comments (not from titles), and it's where the asker hasn't yet learned that they're not supposed to answer solicitations for more info via comments, and should instead be editing the post themselves (that would be a nice problem to see better tackled).

Finally, end users of Stack Overflow can more easily understand if the question is relevant to their needs.

^ This is really the most important goal.

A title should be representative of the question. It should be clear, and it should be non-ambiguous.

The vast majority of users of this platform are not askers or answerers, but searchers / future readers. Failing to create a title with the above properties is a failure to serve those users' needs. And speaking from experience as a reader / searcher, it's very frustrating every time.


For my other thoughts, see

  • my response to the earlier MSE question where I ask for better promotion of reading the Help Center pages, suggest experimenting with models trained on a per-site basis, and suggest giving the model training feedback based on community-made title edits.

  • my response to the MSO question where I spell out the limitations of evaluating success based on how much the community edits the title, and transition into spelling out the deeper limitations of title suggestions ("garbage in, garbage out"), and circle back to asking for better promotion of reading the Help Center.

40

What concerns me about this plan is that the metrics and analyses being proposed seem (largely) designed to increase engagement with the questions, rather than improving the quality of the content as a source of knowledge.

Since algorithms are trained to optimize measurable outputs, it's not hard to imagine that such an algorithm will tend toward suggesting "click-baity" titles that users accept without modification, rather than titles that really distill the meat of the question down into a useful one-line summary.

That said, I like the tool as shown in parts 1 and 2. I would feel reassured to see more emphasis on designing (or emphasizing) metrics that focus on quality, rather than engagement.

3
  • 1
    If I see a bad title (a title that is not good at representing its question clearly and non-ambiguously), I either edit it or downvote. AI-generated or not. I'm assuming that both those actions will count negatively towards the success measurements. What I do find annoying is that both those actions take my time to read and understand the actual question (but again, the same goes whether it's AI-generated or not).
    – starball
    Commented May 18, 2023 at 21:51
  • @starball Yes, I agree - I think these actions would count negatively, and I support that. How much weight a title being edited by the community holds versus the weight that increased engagement is given is a design decision that will need to be made by the StackExchange team. And I hope they weight the first one more!
    – tmpearce
    Commented May 18, 2023 at 22:00
  • 2
*elaboration on my previous comment just to be clear: It's annoying to me to have to read and understand a question based on its body in order to improve its title, especially when I'm not interested in further engaging with the question (to answer it), which is why I sometimes settle for downvoting.
    – starball
    Commented May 19, 2023 at 4:03
27

It disturbs me that this is the third official post about this proposed change (and now the second I've commented on to this effect) and it still contains no indication as to whether the ML model being used is fully bespoke, or is a license to a third party model like GPT which has been customized with SE-specific data.

This is extremely important for how acceptable the tool is. You've posed five questions to the community, and none of them are about ethical concerns with the training dataset. Yet there are also no assurances about what the training dataset even is. What are we to take from this? That SE is using a bespoke model with no third-party training data, but not mentioning this laudable achievement for mysterious reasons? Or, less optimistically, that SE is not using a bespoke model, has instead licensed a pre-trained third-party model to customize, has decided that all ethical and legal concerns with the training datasets for these models are merely the cost of doing business, and does not care if the community has any feedback otherwise?

I've tried to avoid jumping to conclusions about the backend of this feature, but it's getting very difficult when it seems like SE is purposefully avoiding discussing the elephant in the room.

1
  • 1
You mean the AI-driven elephant in the room. Commented May 22, 2023 at 18:28
11

First off, I fully agree that titles are frequently... not great, especially by new users. I do have some concerns, though. First off, you say

We're releasing an experiment based upon Yaakov’s title selector ... We started collecting feedback ... about a week ago but now ... we want feedback on the use case for Stack Exchange sites [Quote trimmed for brevity]

As I'm sure you know, SO has a ton of questions. While I can see how this could (potentially) work on SO, or perhaps even AU, SU, and SF, I do worry about how well this would work for other sites. As an example, there are many sites that are... not about technology, and so a model trained on SO titles is somewhat unlikely to work well there (or, even if the training data also includes titles from those sites, the sheer number of SO titles may drown them out). So, I have a couple of questions about that:

  1. Will there be a separate model for each SE site?
  2. Would this be an "opt-in" type feature where sites that want it can request it on their per-site Meta? Or will all sites be added to it automatically?

Next, I have a concern about one of your criteria:

Do users' questions perform better when they’ve received title-drafting assistance?

What does "perform better" mean? Upvotes would be a decent measure. Views is NOT. As Kevin B said (in a comment), there's a chance that this experiment would mean more people see a good title, only to find a bad question behind it.


I don't know if this is possible, but is there some way we could get the data on which questions had a title made with the tool, such as via SEDE? It would be nice to analyze that data too, though I understand if that isn't possible.

2
  • 1
    "not about technology, and so a model trained on SO titles is somewhat unlikely to work well there" - see also my post here
    – starball
    Commented May 19, 2023 at 7:18
  • My additional thought on measuring views: I think the timeframe being used for the experiment is too short to glean anything from views. And I've seen questions with lots of views and a low score precisely because of misleading (ambiguous or poorly representative) titles.
    – starball
    Commented May 19, 2023 at 7:20
10

A user's title helps us understand a little bit about their level of English knowledge, their electronics/programming knowledge, and their laziness even before we jump into the question.

When reading the question, I always have in the back of my mind the title, until I see the user's actual question(s). So the title is something that I "take with me" while reading the question.

If the title is auto-generated, these things are taken away, and if the question is really short (which it sometimes is), I don't have much to judge by overall.

EDIT:

Forgot to mention: a title written by the user shows how a human thinks in order to ask a question. Other people looking for a similar issue will think the way the user did, not the way the AI does. So it is easier for people with the same issue to find titles written by humans rather than by AI.

Don't forget, this is a website for questions that (probably) cannot be solved by AI. Otherwise we would use ChatGPT and not the website.

Other than that, yes it is a nice thing to have but I suggest:

  1. Let it be available only after a certain reputation level.
  2. And/or let users enable the feature in the settings; don't have it enabled by default.
3
  • 1
    If the question is really short it likely is not clear enough. All the important information should be contained in the body and the question should be comprehensible without the title. The title is simply a summary of the question body. As such I can understand the desire to form better titles automatically. However, my fear is that this fails because most of the question bodies do not contain sufficient information and it's more like polishing a turd. But in principle you should be able to judge a question without its title. If that is impossible, maybe it should be closed. Commented May 30, 2023 at 9:39
@Trilarion Hm, how about letting users with X+ reputation use this feature and get title suggestions?! I'd like to tell newbie/amateur users apart. Commented May 30, 2023 at 9:40
  • 2
    That might be a good idea. You can tell newbies apart also from their rep or how long they are active on the site. But ultimately one should only judge each question on its own and only by its content, never by the users who created them. The content has to stand on its own. If it cannot, something is wrong. Commented May 30, 2023 at 9:48
-2

It's an extra tool and I welcome it as such.

I'm particularly enthusiastic about it for a much less active site than Stack Overflow: DSP.SE. It has many uncreative, generic titles that effectively nobody cares to do anything about. This results in duplicative titles for non-duplicative questions, which pollutes search results and lowers the usefulness of the "Related" tab.

The site also doesn't care about poor-quality questions. But two problems are worse than one. And from my observation, poor titles don't overwhelmingly correspond to poor questions.

No, the solution isn't "start caring", because that won't happen. So yes, optional assist bots are great. Automated title blocks based on keywords or other "quality criteria", on the other hand, are something that should be limited to Stack Overflow or whichever sites insist on them, as I find them quite counterproductive.

6
  • 3
    I'm curious- on dsp.se, is it just newer users who write poor titles? Or does it include more established users? Because if it includes more established ones, that's a culture problem that needs to be addressed.
    – starball
    Commented May 19, 2023 at 18:42
  • 1
    @starball It's rarely established, but there's far worse "culture problems" that need addressing. It's only halfway a "StackExchange" network. I, who considers SO mods hawkish, would love to throw an SO moderator at them. Commented May 20, 2023 at 13:17
  • 1
    Here's what I usually do when I'm not interested in spending my time improving the title myself: As instructed in [ask], can you please write a descriptive, non-ambiguous title? For more guidance, see [How do I write a good title?](//meta.stackexchange.com/q/10647/997587).
    – starball
    Commented May 20, 2023 at 18:46
  • 2
    @starball: What is the conversion rate? 50%? 10%? 1%? 0.1%? Is a title ever changed (to a satisfactory degree) as a result? Commented May 21, 2023 at 11:55
  • For illustration, do you have some examples of such titles, incl. ones that are (currently?) blocked by 'automated title blocks'. Commented May 21, 2023 at 12:21
  • @This_is_NOT_a_forum I never measured it. But high enough to surprise me. I want to say probably more than 10%, but I have no data.
    – starball
    Commented May 21, 2023 at 17:23
-6

What other use cases can you imagine for a Stack Overflow title drafting assistant?

Since you specified Stack Overflow, please have such an assistant provide guidance that opinion-based questions are off-topic on Stack Overflow whenever it detects the phrase "best practice" in the title, and block users from proceeding until it is removed.

This request could be expanded significantly to cover more common faux pas in questions posted to Stack Overflow, but this is the single lowest-hanging fruit that I, personally, am most interested in seeing improved.
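The check being requested here is essentially a literal phrase match. As a hypothetical sketch (none of this reflects anything Stack Overflow actually runs), it could look like the following; note that, as the comments point out, a literal match like this is trivially evaded by misspellings:

```python
import re

# Hypothetical phrase list -- an assumption for illustration only.
BLOCKED_PHRASES = ["best practice", "best practices"]

def title_is_blocked(title: str) -> bool:
    """Return True if the title contains a blocked phrase.

    Lowercases and collapses whitespace so trivial variations
    ("Best   Practice") are still caught. Deliberate misspellings
    ("best parctice") slip straight through a literal match like this.
    """
    normalized = re.sub(r"\s+", " ", title.lower())
    return any(phrase in normalized for phrase in BLOCKED_PHRASES)
```

This is the kind of simple pattern matching that, as a comment below notes, doesn't need an AI at all; the open question is whether an LLM-based assistant would handle the misspelled variants more robustly.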

9
  • 9
what I've learned from being "scolded" by Cody is that "bad" keywords / key-phrases do not mean a bad / unsalvageable question, and that editing can often fix things up if you're willing to spend the effort.
    – starball
    Commented May 18, 2023 at 22:46
  • 1
    Also, this just seems like simple pattern matching and nothing you'd need an AI for. Not that the posed discussion question specifically asks for what you'd need an AI for, but if we're really going to make these asks here, there are probably a lot more of them in our list of feature requests.
    – starball
    Commented May 18, 2023 at 22:48
  • 4
    Wouldn't this most likely cause porblems?
    – Someone
    Commented May 19, 2023 at 0:38
  • @Someone Only if you're trying to ask a question about best practices, which is not something you should ask about on Stack Overflow.
    – TylerH
    Commented May 19, 2023 at 14:47
  • @starball Sometimes you can fix questions, but a significant number (most) questions I encounter with that phrase in the title are inherently off-topic; they cannot be salvaged without completely changing what OP is asking, which is OP's prerogative. I'd rather make OP ask a good/on-topic question at the moment of typing/submission rather than have to spend reviewer time and votes closing it.
    – TylerH
    Commented May 19, 2023 at 14:48
  • @starball As for pattern matching vs AI, I agree, but they haven't done the pattern matching thing despite asking for it, and this is the first time they have asked for suggestions on things a "title drafting assistant" can do. Arguably pattern matching is also "assistance", so really, it all falls under this bucket. I don't particularly care how it gets implemented; the major win here for the site would be that it is implemented at all.
    – TylerH
    Commented May 19, 2023 at 14:50
  • @TylerH it's more likely that people would start asking about "best parctices" rather than stop asking about "best practices," just like the "problem" filter caused people to ask about "porblems" instead.
    – Someone
    Commented May 19, 2023 at 14:54
  • 1
    @Someone Sure, and I think an AI assistant would be somewhat robust in terms of handling the common iterations. I hope the developers working at Stack Overflow, of all places, are capable of more than a single literal string match.
    – TylerH
    Commented May 19, 2023 at 15:14
  • @starball See, for example, q/76284518/ vs q/76288638/ -- the former had a title that looked off-topic but was editable easily enough to focus on the on-topic question OP was really asking. Meanwhile, the latter is so inherently off-topic that OP would have to change it into an entirely different question to make it on-topic. That latter kind of question should never have been allowed by the system, and if this feature were implemented, maybe it would've been prevented.
    – TylerH
    Commented May 19, 2023 at 15:35
