Input/Output test failing

Probably a noob question, but I can’t find a solution.
I’m putting together simple tests for a class, and the “match” tests fail even though the expected output is included in the answer. What is wrong here?

1 Like

Hi @caparelli welcome to the community!

Can you please post a link to your repl so the community can investigate and suggest some ideas?

Hi @IanAtCSTeach !
This is the link to the repl: rogue repl
Don’t know if I have to do something else to share it, since it’s a Teams repl.
Thanks!!

Hi @caparelli thanks for sharing the link. Yes, unfortunately I can’t see it because I’m not a member of your Team. Is it possible to duplicate the repl into a personal one and then share the link?

Don’t think that will work, since normal repls don’t have the Input/Output test feature.
I took the liberty to create a new team and invite you.

1 Like

Hey!

I think the tests should fail. Tests should only pass if the expected output matches exactly.

You are wrong.

According to replit docs input/output testing:

A match test is passed if the expected output is in (or equal to) the actual output. In other words, the actual output does not have to be identical to the expected output, it must just include it.
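
Expressed as plain Python (just the rule as the docs state it, not Replit’s actual test code), the check is containment, and a multi-line expected output has to appear as one contiguous block:

```python
# The documented "match" rule: expected only has to be *contained* in actual.
actual = "Blanks are 10.00 % of total votes."
print("10.00" in actual)                      # True: containment is enough

# A multi-line expected output must appear as one contiguous block, though:
print("10.00\n20.00" in "10.00 %\n20.00 %")   # False
```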

Hi @caparelli thank you for creating a test team and inviting me.

I can confirm that I see the same error as you. However, I wonder if you could share the test code with me so I can rule out an issue here?

Here is the code.

In the test provided, the input values are 10, 20 and 70. The expected results are 10.00, 20.00 and 70.00, as the sum is 100 and the results are displayed as percentages.

```python
blanks = int(input("Number of blank votes: "))
nulls = int(input("Number of null votes: "))
valids = int(input("Number of valid votes: "))
total = blanks + nulls + valids
print(f"Blanks are {(blanks/total)*100:.2f} % of total votes.")
print(f"Nulls are {(nulls/total)*100:.2f} % of total votes.")
print(f"Valids are {(valids/total)*100:.2f} % of total votes.")
```

Sorry, I meant the code you put into the modal when creating the unit test:

That’s the thing, I’m not using unit tests. I am trying to use the simpler input/output test functionality provided in Teams.

As I’m using the Team in an introductory Python class and had to hurry, I already managed to make autocorrection work with regex. But as this is much simpler, I really wish I knew what is going on and how to make it work.

1 Like

My apologies @caparelli thank you for clarifying. I should have realised it was in Teams for Edu!

I’ve replicated this issue and think that there might be a bug. I’ve reported it to Replit Support and will update you with any information here.

I managed to get the test to run successfully with “match” but only by having the entire line of text as shown:

Even removing the full stop from each expected output changes the input/output test from a pass to a fail.

Thanks @IanAtCSTeach !

I’ll wait for the updates on this issue.

Hey @caparelli, support here!

The data you see in “Actual output” includes the prompts added by the input statements in your Python code. Because input statements write their prompts to the console’s output, the I/O test’s actual output contains that text as well.
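
For example, assuming the test harness simply captures everything the program writes to the console, piping the three test inputs into the script (saved here under the hypothetical name `votes.py`) shows the prompts landing in front of the results:

```python
# Illustration only: with stdin piped, input() still writes its prompts to
# stdout, so they end up at the start of the captured "Actual output".
import subprocess

result = subprocess.run(
    ["python3", "votes.py"],   # hypothetical filename for the script above
    input="10\n20\n70\n",      # the three test inputs
    capture_output=True,
    text=True,
)
print(result.stdout)
# Number of blank votes: Number of null votes: Number of valid votes: Blanks are 10.00 % of total votes.
# Nulls are 20.00 % of total votes.
# Valids are 70.00 % of total votes.
```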

The match test can still pass with that extra data present, because your expected result is contained within the actual output even though there is extra content at the beginning.

When the periods (full stops) are missing, your match test fails because your expected output, taken as a whole, no longer exists within the actual output, so the test flags the extra content at the beginning as incorrect.

If you used an exact test instead of a match test, no extra content would be allowed at the beginning of the actual output: the actual output would have to be identical to the expected output, an exact match.
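
In plain terms (a sketch of the idea, not our actual test code):

```python
# Conceptual difference between the two test types.
def match_test(expected: str, actual: str) -> bool:
    return expected in actual   # extra surrounding content is allowed

def exact_test(expected: str, actual: str) -> bool:
    return expected == actual   # no extra content allowed anywhere
```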

I hope this helps explain the issue clearly as to why this happens. If not, please let me know, and I would be more than happy to record a video explaining this in more detail!

Hi @ShaneAtReplit !

I understand what you said, but I still fail to see why the test fails. If you check my screenshot at the beginning of the thread, I only “expect” the numeric results, leaving the students free to write whatever statement they believe is adequate for the problem. And as I see it, the numeric result exists within the actual output, which should suffice for a “match” pass, just like a .* number .* regex matches.

But I may be wrong :smiley:

1 Like

Hey @caparelli!

I think I fully understand the issue now, after reproducing and messing around with it on my own.

I’ve recorded a short (under 10 minutes) Loom video to explain better how our I/O tests work.

Hopefully, the video explains well enough how a match test compares the code’s output against the expected result.

What I summarized at the end of the video was that our I/O tests don’t match results line-by-line. Instead, they take the entire expected output and check if it’s contained within the actual output. However, if this is not the expected result, and line-by-line would be a better way to perform I/O tests, I’d be happy to bring this up with the team!
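
To make that concrete, here is a small sketch (an illustration, not the real test runner) of both behaviours against the output of the voting script above:

```python
# Illustration: whole-output containment vs. a hypothetical line-by-line check.
actual = (
    "Number of blank votes: Number of null votes: Number of valid votes: "
    "Blanks are 10.00 % of total votes.\n"
    "Nulls are 20.00 % of total votes.\n"
    "Valids are 70.00 % of total votes.\n"
)

# Expecting only the numeric results, one per line (the original test):
numbers_only = "10.00\n20.00\n70.00"
print(numbers_only in actual)   # False -> fail: "10.00\n" never occurs in the output

# Expecting the complete lines, full stops included (the version that passed):
full_lines = (
    "Blanks are 10.00 % of total votes.\n"
    "Nulls are 20.00 % of total votes.\n"
    "Valids are 70.00 % of total votes."
)
print(full_lines in actual)     # True -> pass

# A hypothetical line-by-line check would accept the numbers-only version too:
print(all(line in actual for line in numbers_only.splitlines()))  # True
```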

(I haven’t tried RegEx I/O tests, although I think they might be the solution you’re looking for as a line-by-line “does this line contain x value” check)

Please let me know if all of this makes sense!

Hey @ShaneAtReplit !

Perfect explanation. What got me confused is that the test interface actually numbers the lines, so I assumed it was checking each line of the expected output against the actual output. Removing the line numbering would definitely avoid confusion (or at least this behaviour could be mentioned somewhere in the docs).

I believe that in introductory classes, where students can’t break things into functions yet, line-by-line matching would be a very useful tool, as it allows checking a variety of skills in a single program (do this, then do that, etc.). The way it works now, I’d have to create several “mini-problems”, which would greatly increase the amount of work to be done.

Thank you!!

1 Like

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.