
I've been experimenting with ChatGPT and wondered if it could answer this question on main.

So I pasted the whole question text into the ChatGPT interface:

In MATLAB, I have generated an FIR low pass and high pass filter of order 16. The code is as follows:

filter1 = fir1(16,400/16000,'low')   %% pass band - 400 Hz, sampling frequency - 16000 Hz
filter2 = fir1(16,800/16000,'high')  %% pass band - 600 Hz, sampling frequency - 16000 Hz

The sum of coefficients of low pass filter adds to 1 whereas the sum of filter coefficients of high pass filter does not add to 1. Shouldn't the sum of filter coefficients add to 1?

and it came up with this response:

It is not necessarily the case that the sum of the coefficients of an FIR filter should add up to 1. In fact, for many types of FIR filters, the sum of the coefficients will not be equal to 1.

In the case of the low pass filter you have created using the fir1 function, the sum of the coefficients does happen to be equal to 1. This is because the fir1 function by default uses a windowing technique to design the filter, which can cause the sum of the coefficients to be equal to 1. However, this is not always the case, and it is possible to design FIR filters using fir1 with other windowing techniques that do not result in a sum of 1 for the coefficients.

For the high pass filter you have created, it is not necessarily the case that the sum of the coefficients should be equal to 1. High pass filters have a different frequency response than low pass filters, and as a result, the coefficients of a high pass filter will typically be different from those of a low pass filter. It is possible that the sum of the coefficients of your high pass filter does not equal 1, but this is not necessarily a problem with your filter design.

ChatGPT answers have been banned on SO, but what do people think of this specific example?

My take:

The answer is correct as far as it goes, but it doesn't exhibit any understanding of the deeper reason why it's correct (a sketch of that reason follows below). Hardly surprising, given that the machine is a Large Language Model: I would expect the language to be good, but not necessarily the understanding of nuance.

Kinda like some people, really? :-)
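
For reference, the deeper detail the response skirts around: the sum of an FIR filter's coefficients equals its frequency response at DC, $H(e^{j0}) = \sum_n h[n]$. A low pass filter passes DC (and fir1, by default, scales its filters so the passband gain is 1), so the coefficients sum to 1; a high pass filter rejects DC, so its coefficients sum to (nearly) zero. A minimal MATLAB sketch of this, assuming the Signal Processing Toolbox for fir1 and freqz:

b_lp = fir1(16, 400/16000, 'low');    % passband includes DC
b_hp = fir1(16, 800/16000, 'high');   % DC lies in the stopband

sum(b_lp)      % ~1: the low pass passes DC with unit gain
sum(b_hp)      % ~0: the high pass rejects DC, so its taps nearly cancel

% Cross-check against the frequency response itself:
[H_lp, f] = freqz(b_lp, 1, 1024, 16000);   % f(1) = 0 Hz
[H_hp, ~] = freqz(b_hp, 1, 1024, 16000);
abs(H_lp(1))   % matches sum(b_lp)
abs(H_hp(1))   % matches abs(sum(b_hp))

The same argument says a band pass filter's coefficients also sum to roughly zero, since it too rejects DC.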

  • That is not only a meaningless answer, but the few details it gives are wrong. It is a coherent piece of text, written with conviction, but as an answer here I would downvote it, or at least point out why it's wrong in a comment. Commented Dec 9, 2022 at 1:38
  • @CrisLuengo i.sstatic.net/9aawB.png I'd think the understanding of words alone would suffice... Commented Dec 10, 2022 at 14:45
  • @OverLordGoldDragon Beautiful! Commented Dec 10, 2022 at 16:03
  • @OverLordGoldDragon Glorious! Thank you for sharing. – Peter K. Commented Dec 10, 2022 at 17:41
  • I wonder if someone is trying to connect ChatGPT to Watson? Watson doesn't seem to be doing so badly regarding the quality of answers. Commented Dec 12, 2022 at 21:15
  • Everyone's talking about ChatGPT answers, no one's talking about ChatGPT comments... -- also chess. Commented Feb 10, 2023 at 11:51

3 Answers


I broke it

[screenshot of a ChatGPT exchange]

  • WELL DONE!!! :-D That's hilarious. – Peter K. Commented Dec 8, 2022 at 14:47
  • uhm, OverLord, the DFT always, always periodically extends the data passed to it. The DFT fits $N$ basis functions, all periodic with period $N$, to the data passed to it. That is operationally equivalent to assuming its input is periodic. [illustrated in the sketch after these comments] Commented Dec 12, 2022 at 21:20
  • It's possible, when one party is not completely in command of all of the facts, for the other party to shut them up with a retort that, itself, is fallacious. Commented Dec 12, 2022 at 21:22
  • uh, this ain't new? We've been over it. We disagree on whether "assumes input is periodic" is a fair summary statement, not on any substance. I say it's not, because it's misleading. Also, down-arrowing a joke post is a bit of a stretch, though w/e @robertbristow-johnson Commented Dec 13, 2022 at 8:00
  • Specifically, I say the "input" is the finite $x$; you say it includes the extension from $-\infty$ to $\infty$. I think the statement reads much more strongly in the former sense, especially to someone who doesn't know better. E.g. if one asks "How long is the input to the DFT?", we'll surely answer $N$ ... $\neq \infty$. Commented Dec 13, 2022 at 8:02
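
To illustrate the periodic-extension point argued in these comments (a sketch of my own, not from either commenter): the $N$ DFT basis functions $e^{j2\pi kn/N}$ all repeat with period $N$, so evaluating the inverse-DFT sum at indices outside $0,\ldots,N-1$ simply replays the data periodically.

N = 8;
x = randn(1, N);
X = fft(x);
n = -N : 2*N - 1;                       % indices spanning three periods
E = exp(1j*2*pi*(0:N-1).'*n/N);         % basis functions e^{j 2 pi k n / N}, k = 0..N-1
x_ext = real(X * E) / N;                % inverse-DFT sum evaluated at every n (real() drops round-off)
max(abs(x_ext(N+1:2*N) - x))            % ~0: reproduces x on n = 0..N-1
max(abs(x_ext - repmat(x, 1, 3)))       % ~0: the periodic extension everywhere else

Whether that makes "the DFT assumes its input is periodic" a fair summary is exactly the disagreement above.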

Quite impressive, yet very prudent


Large language models are capable of convincing writing, given enough trial and error - just look at Lambda. I'm unfamiliar with ChatGPT, though. Also, Lambda's exchange was a social one rather than a technical one, so maybe ChatGPT won't be as rich... for now.

Curious how these models will perform if tuned directly on Stack Exchange. Also, GPT-4 in 2023.

Edit: Twitter has since turned this thing inside out - a nice overview is in this YT clip; the good stuff starts at 16:12. You can run Linux on it...

  • Speaking of ChatGPT, I saw this today: maximumeffort.substack.com/p/… – datageist Commented Dec 7, 2022 at 22:19
  • @datageist Hmm, this makes me realize these models are teachable, even with fixed weights, via prompt engineering. Its responses effectively invoke "reason", and since it has memory, it builds on former responses and creates knowledge that is new to itself. I wonder if we can teach it the DFT. Commented Dec 7, 2022 at 22:37
  • Yeah, I think the memory is one of the new features of ChatGPT (vs GPT-3, etc.). – datageist Commented Dec 7, 2022 at 22:44
