Last week marked the release of Ubuntu 22.04 LTS <https://ubuntu.com/blog/ubuntu-22-04-lts-released>, and so far it looks like it brings some noteworthy improvements over the last LTS, notably Wayland being enabled by default for Intel/AMD users! There is no Wayland enablement for Nvidia users in 22.04 LTS though, and PulseAudio is still the default audio server instead of the more modern pipewire+wireplumber stack that's been appearing in other distributions. It also appears that Firefox in 22.04 LTS is now installed as a snap at the request of Mozilla, and the deb package is dropped from the official repo.

Also interesting is that Ubuntu will continue to avoid supporting flatpak in favor of snap, as explained by Mark Shuttleworth:

https://www.omgubuntu.co.uk/2022/04/ubuntu-wont-support-flatpak-anytime-soon

> I can say right now Flatpaks wouldn't work for us. I don't think they have the security story and I also don't think they have the ability to deliver the same integrity of execution over time that Snaps have 'cos we built those things into Snaps.

> I like the fact that people have a diversity of opinions on ways to solve the problem [...] but I also think we're going to deliver a far better experience to developers and to users if we concentrate our efforts around something we really can move forward.

I respectfully disagree with Shuttleworth's appraisal, because it contradicts the overall experience I've had with open source projects, which tend to package flatpaks and AppImages that in turn are well supported across many distributions. While I understand that it's very easy to set up flatpak in Ubuntu <https://flatpak.org/setup/Ubuntu> these days, I'm still disappointed that this is the stance Ubuntu is taking, because it doesn't quite align with the interests of the Linux community at large, which is trying to improve the desktop experience. I worry that the distribution that got me into Linux years ago will increase fragmentation as time goes on if it continues to focus on its walled-garden approach. What do you think?

- Josh
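(For reference, the setup that page describes amounts to just a few commands; the package and remote names below are taken from the official instructions, so treat this as a sketch rather than gospel:)

sudo apt install flatpak
# optional: flatpak support in GNOME Software
sudo apt install gnome-software-plugin-flatpak
# add the Flathub repository, then log out and back in
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo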
I think history has shown that the fragmentation will decrease. It would increase if Ubuntu continued to focus on its walled garden and succeeded, but Ubuntu is not good at that. They pushed Upstart, but the community went with systemd. They pushed Mir, but the community went with Wayland. They pushed Unity, but dropped it for GNOME. Maybe they do better with smaller contributions to existing projects, but their attempts to drive the Linux community have largely failed.

On Mon, 2022-04-25 at 13:42 -0400, Joshua Stone via WLUG wrote: [...]
I have been chasing my tail trying to download a very large file to my desktop. The bug is strange enough that it might be specific to my hardware.

The issue is that I cannot copy very large files accurately. The threshold file size seems to be about 8589934592 bytes. Any file this size or larger seems to be uncopyable on my desktop.

I downloaded a corrupted 71.9GB file with Ubuntu 20.04, Ubuntu 22.04, a Windows 10 virtual machine running under Ubuntu 20.04, and Manjaro. All of these OSes were running on the same hardware. I had no issues downloading the file on a Raspberry Pi with an external hard drive.

I have included the C file I used to create the test files, which also has some notes on the results of the tests. Any advice or help would be appreciated.
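(A quick way to reproduce the check Kevin describes: copy the file, then compare checksums. The filenames here are placeholders:)

cp bigfile bigfile2
md5sum bigfile bigfile2    # differing sums mean the copy is corrupted
cmp bigfile bigfile2       # or compare byte-by-byte; prints the first differing offset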
What are the kernel version(s)? What filesystem?

I suggest:

- Try memtest86+
- Try a live Fedora 35 on a USB stick, since all your other tests were relying on Ubuntu, even the Windows 10 which was a VM on Ubuntu. Fedora usually has much newer kernels, so if there is a Linux bug, it might be fixed already.

On Thu, Apr 28, 2022 at 11:55:04AM -0400, Kevin Stratton via WLUG wrote: [...]
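(Writing either image to a USB stick is a one-liner with dd; the ISO filename and /dev/sdX device below are placeholders, and you should double-check the device name, since dd will happily overwrite the wrong disk:)

sudo dd if=memtest86plus.iso of=/dev/sdX bs=4M status=progress conv=fsync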
Thank you for reminding me about memtest86. I had a bad memory module, and I am trying to RMA it now. It seems unlikely that the memory would go bad after about a year. Live and learn....

Ubuntu just updated to 22.04 with kernel 5.15.0-27-generic.

On 4/28/2022 1:56 PM, Chuck Anderson wrote: [...]
As an aside, it's generally a good idea to use larger chunk sizes to make random data generation run faster. The Python example below can generate 4GB of random data in about 10s on my laptop:

#!/usr/bin/env python3

import os

CHUNK_SIZE = 2 ** 16

NUMBER = 0x100000000   # 2^32: copies without md5-detected error, file length 4294967296
#NUMBER = 0x400000000  # 4*2^32: copies are erroneous per md5sum, file length 17179869184
#NUMBER = 0x200000000  # 2*2^32: copies are erroneous per md5sum, file length 8589934592
#NUMBER = 0x100000001  # 2^32+1: copies without md5-detected error, file length 4294967297
#NUMBER = 0x180000000  # copies without md5-detected error, file length 6442450944
#NUMBER = 0x1FFFFFFFF

def write_file(length, chunk_size):
    with open('bigfile', 'wb') as outfile:
        for _ in range(length // chunk_size):
            outfile.write(os.urandom(chunk_size))
        # write any remainder so the file length matches NUMBER exactly
        if length % chunk_size:
            outfile.write(os.urandom(length % chunk_size))

def main():
    write_file(NUMBER, CHUNK_SIZE)

if __name__ == '__main__':
    main()

On Thu, Apr 28, 2022 at 8:46 PM Kevin Stratton via WLUG <wlug@lists.wlug.org> wrote: [...]
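(To try it, save the script and verify a copy the same way as before; the script filename genfile.py is assumed:)

python3 genfile.py
ls -l bigfile    # expect 4294967296 bytes with NUMBER = 0x100000000
cp bigfile bigfile2 && md5sum bigfile bigfile2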
doug> I manually added Kevin back, full blown (Kevin Stratton <kstratton@fastmail.us> style) on my reply header above. I just like seeing it. Or causing trouble. Ignore and/or X-out if you hit reply here... pardon the noise.

THE BIG FILE debug. In general, first I think it's very cool and appropriate to post an (easy-ish) C-code read/write task! I stripped comments and repasted below, pardon again:

#include <stdio.h>   // FILE, fopen, fwrite, fclose
#include <stdint.h>  // uint32_t, uint64_t
#include <stdlib.h>  // random()

#define NUMBER 0x1FFFFFFFF // copies without md5 detected error, file len 8589934591

int main() {
    FILE *fp;
    uint32_t rand;
    uint64_t count;

    fp = fopen("bigfile", "w+");
    for (count = 0; count < NUMBER; count++) {
        rand = random();
        // note: sizeof(random) is the size of the *function* (1 under GCC's
        // extension), not of the variable, so each fwrite emits a single byte,
        // which matches the observed file length; sizeof(rand) was probably intended
        fwrite(&rand, sizeof(random), 1, fp);
    }
    fclose(fp);
    return 0;
}

dmildram/doug> First, try the memtest86+, reply-to-list, and I hope we swap stories on memtest later after we figure out your fun puzzle; I definitely want to hear where you ran into (+found) trouble! BUT you didn't show the shell command (syntax) that YOU RAN... was it quiet output? (What was the output?) OR... did you get error messages, or did it stop midway on you? I await your chapter 2... as you said, it could be HW, it's semi-random, AND I don't get your whole story yet (close...) --doug

On Thu, Apr 28, 2022 at 11:55 AM Kevin Stratton via WLUG <wlug@lists.wlug.org> wrote: [...]
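(Since Doug asked for the exact shell commands: a minimal way to build and run the test program, with assumed filenames; gcc may warn about taking sizeof of a function:)

gcc -O2 -o makebigfile bigfile.c
./makebigfile && echo "exited cleanly"
ls -l bigfile    # check the resulting size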
Thank you everyone for your attention to this.

I was able to get a free (as in free beer) memtest86 installed on a USB drive. Memtest86 did lead me to a bad memory module. I am operating on half my RAM until I get a replacement module. Memtest86+ does not seem ready for a "secure boot" environment.

When I did the copy I was talking about, I was in the folder where the file exists: I did a simple "cp file file2" with no options. There were no error messages of any kind; the RAM issue was silent. If I recall correctly, the bad copied file was always the same size as the original. The issue that led to this was that I could not install a very large software package; I would have random issues that were related to a corrupted file.

I wanted to use the minimum resources possible with the C code to eliminate confounding factors.

I am currently downloading the file and I will try to install the application today.

On 4/28/2022 4:44 PM, Doug Mildram wrote: [...]
Yeah, I was thinking about how to minimize resource usage, since Python has a much larger runtime compared to a compiled program. I've ported the Python program to Rust:

https://gist.github.com/joshua-stone/576e5826302a720331a2686f25c957f1

When compiled against musl with LTO and static linking, then finally run through `strip`, it produces a 391KB binary with no external libraries, not even glibc! I'm sure plenty can be done to further optimize the memory footprint, but this has been a fun exercise already.

On Fri, Apr 29, 2022 at 11:05 AM Kevin Stratton via WLUG <wlug@lists.wlug.org> wrote: [...]
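(For anyone who wants to reproduce that build, the steps look roughly like the following; the binary name bigfile and the Cargo.toml setting are assumptions on my part:)

# enable LTO in Cargo.toml first:  [profile.release] / lto = true
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
strip target/x86_64-unknown-linux-musl/release/bigfile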
Why not just use "dd":

dd if=/dev/urandom bs=104857600 count=82 of=bigfile

On Fri, Apr 29, 2022 at 01:03:19PM -0400, Joshua Stone via WLUG wrote: [...]
If I understand this one-liner correctly, dd is transferring 82 blocks of size 104857600 bytes (100 MiB), for a total of 8598323200 bytes, just over my threshold. My original testing was:

sudo /dev/random > bigfile

And I monitored the size of the file with ls -l in a different terminal, then hit CTRL-C to stop the transfer. Linux is not a major part of my vocation (today?).

At the risk of being a "broken record", my problem has been solved by fixing my RAM. I can now work with a 71.8GB zip with no issues.

On 4/29/2022 6:05 PM, Chuck Anderson wrote: [...]
participants (5)

- Chuck Anderson
- Dennis Payne
- Doug Mildram
- Joshua Stone
- Kevin Stratton