It's A Digital Disease!

This is a sub that aims to bring data hoarders together to share their passion with like-minded people.

5501
 
 
The original post: /r/datahoarder by /u/Neocitizen2077 on 2024-12-08 22:11:32.

I had tons of Live Photos on my phone, but with all the pictures I take, my phone's storage fills up fast. Moving the Live Photos to a USB drive or computer turned them into static images, and the live feature was gone... My solution is to save the Live Photos in a NAS album that supports Live Photo backups. Lots of NAS brands can do this now, like Synology, Ugreen, etc. Using the NAS's mobile app, you can not only back up automatically but also access them remotely. And the downloaded images will still be Apple's original Live Photos!

https://preview.redd.it/mpkd84yq7p5e1.png?width=1170&format=png&auto=webp&s=fe095772f5c8645085e2482b8f453f524a2e56bc

https://preview.redd.it/00re2igs7p5e1.jpg?width=1170&format=pjpg&auto=webp&s=3b28dcfbf14d7622024e197ce061003bc5bd1930

https://preview.redd.it/yf3280gt7p5e1.png?width=1170&format=png&auto=webp&s=4894144d12458a24d3a58dda64a32b3729bbf4ac

5502
 
 
The original post: /r/datahoarder by /u/ehead on 2024-12-08 22:04:49.

I'm looking at a JBOD enclosure and they have a USB 3.0 and a 3.2 version, the latter costing $100 more.

If I put 6 SATA 3 disks in this enclosure... I guess they all share the same USB interface, so while SATA 3 is maybe 30% faster than USB 3.0, with 6 SATA disks I'm guessing I would see a pretty big performance boost from the 3.2 unit? Particularly when doing something like a snapraid sync?
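A rough way to frame it is to compare the enclosure's single shared USB link against the combined throughput the six disks could deliver. The numbers below are assumptions, not measurements: roughly 5 Gbps for USB 3.0, roughly 10 Gbps if the pricier unit is USB 3.2 Gen 2 (worth checking which Gen it actually is), and about 200 MB/s sustained per spinning disk.

# Back-of-the-envelope check: does the USB link or the disks bottleneck first?
# All figures are assumptions, not benchmarks.

def usable_mb_per_s(link_gbps: float, overhead: float = 0.85) -> float:
    """Convert a nominal link rate to rough usable MB/s after protocol overhead."""
    return link_gbps * 1000 / 8 * overhead

disks = 6
per_disk_mb_s = 200              # assumed sustained sequential rate of one HDD
aggregate_disks = disks * per_disk_mb_s

for name, gbps in [("USB 3.0 (5 Gbps)", 5), ("USB 3.2 Gen 2 (10 Gbps)", 10)]:
    link = usable_mb_per_s(gbps)
    print(f"{name}: link ~{link:.0f} MB/s, disks ~{aggregate_disks} MB/s "
          f"-> bottleneck: {'USB link' if link < aggregate_disks else 'disks'}")

On those assumptions either link is the bottleneck when all six drives are busy at once (as in a snapraid sync), but the 10 Gbps version roughly doubles the ceiling.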

5503
 
 
The original post: /r/datahoarder by /u/shanefergusonphoto on 2024-12-08 21:31:36.

Hiya All,

I'm using a Mac. Within GoodSync I can create a list of jobs and set each of them to run on a schedule: for example, the 1st job at 1am, the 2nd at 2am, and so on.

However, if there is nothing to back up/sync, the next job still waits for up to an hour (or whatever the scheduled interval is) before it runs.

I see that I can create a Job Group, and if I run it manually, each job runs in turn once the previous one has finished, with no more waiting.

However, I can’t seem to find an option to schedule the Group to begin, like I could with individual jobs.

Does anyone know if/how this can be achieved?

Thanks :)

5504
 
 
The original post: /r/datahoarder by /u/Nexztop on 2024-12-08 20:32:39.

Does anyone know of any S3-compatible storage (or at least something I can connect to with rclone), like Backblaze, Wasabi, etc., with servers in the US East (Northeast)? The one with the best connection so far is Storj; I may stay with them, but I'd still like to see other options.

Just in case: I prefer a scalable option since I don't use more than 400 GB, but I'm open to options.

I just got into all of this object storage stuff and it is very interesting.

The options I've tried so far are:

  • Backblaze B2
  • Wasabi
  • Telnyx
  • Storj

If anyone has any others I could try, please let me know.
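If you want to sanity-check speed and latency from your own connection before committing, most of these providers speak plain S3, so a quick scripted test works the same everywhere. Below is a minimal sketch with boto3; the endpoint URL, region, keys, and bucket name are placeholders to swap for whichever provider you're trialling, and the bucket has to already exist there.

import time
import boto3

# Placeholder endpoint/credentials -- substitute the values from the provider you're testing.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.example-provider.com",
    region_name="us-east-1",
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_SECRET",
)

payload = b"x" * (8 * 1024 * 1024)  # 8 MiB test object

start = time.time()
s3.put_object(Bucket="my-test-bucket", Key="latency-test", Body=payload)
upload_s = time.time() - start

start = time.time()
s3.get_object(Bucket="my-test-bucket", Key="latency-test")["Body"].read()
download_s = time.time() - start

print(f"upload: {upload_s:.2f}s, download: {download_s:.2f}s")

Whichever provider wins can then be added to rclone as a remote of type s3 with a custom endpoint, so the result carries straight over to your normal workflow.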

5505
 
 
The original post: /r/datahoarder by /u/spritz_bubbles on 2024-12-08 20:29:01.

I already had to go through a data-collecting process that was tedious and stressful from August to October. I lost hair and my immune system has had issues ever since. This platform was my dream passion, and it took weeks of non-stop backing up. Either way, a lot of my memories and hard work were destroyed.

Now, with the US being paranoid and the looming ban of TikTok, it looks as though the “download all account data” option on TikTok is a front, just like on Meta, where you get zipped folders of outdated, primitive code rather than your actual content. Nothing user-friendly, to say the least.

Is there a way to access my content after the ban? I am unfamiliar with VPNs, but if I get one, will I have to make a whole new account? Or will I still be able to access my account and its data with a VPN?

We're being told that if the ban goes through, you “might be able to access your own uploaded TikToks,” but the app will die within the following couple of months because there won't be updates.

This is really shitty to do to people. 170 million US citizens having their content destroyed is so fucked.

I know this topic could open many rabbit holes on why tiktok has flaws, I get it. That’s why I stopped using it as much after its initial hype.

But I really just want someone to help me find a way to access my data and content and answer my questions: will user data still be accessible after January 19th, and if so, for how long? Are there ways to download all your data easily without doing it manually?

This is an OCD nightmare.
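For the "without manually doing it" part, one hedged option is yt-dlp, which has a TikTok extractor: pointed at your own profile URL it can pull your public uploads in bulk (it won't touch DMs, drafts, or the rest of your account data, and extractor support can break at any time). A minimal sketch with the Python API; the profile URL is a placeholder.

from yt_dlp import YoutubeDL

# Placeholder profile URL -- replace with your own account.
profile_url = "https://www.tiktok.com/@your_username"

opts = {
    "outtmpl": "tiktok-archive/%(upload_date)s-%(id)s.%(ext)s",  # date-stamped filenames
    "writeinfojson": True,   # keep captions/metadata alongside each video
    "ignoreerrors": True,    # skip private/removed posts instead of aborting
}

with YoutubeDL(opts) as ydl:
    ydl.download([profile_url])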

5506
 
 
The original post: /r/datahoarder by /u/Critical-Ad7413 on 2024-12-08 19:50:06.

Are there any tests or benchmarks comparing an old server CPU like a 26xx Xeon with a newer consumer model like a 12600K, each with their typical allotment of RAM? I understand that in raw compute power the new one crushes it, but I am curious how other factors play out. It seems pretty reasonable to get a TB of RAM on the Xeons versus the typical max of 128GB that most consumer boards support, plus the server chips have way more PCIe lanes. I don't know how much the added RAM and PCIe lanes help when handling a server with several dozen drives.

Does an SSD cache make up for the difference in RAM?

**Not asking about transcoding, just file-serving performance.

5507
 
 
The original post: /r/datahoarder by /u/lorovesgo on 2024-12-08 19:30:18.

I'm using Lazesoft Disk Image & Clone.

I'm trying to clone my HDD to my SSD, but every time I try, it gets stuck at 18%. Why? IDFK. Does anyone know if the copying is still active and it will just jump to 100% after a while? Or do I just give up and use something else? I was planning on using Clonezilla, but it just looks ugly and slower, and I've heard people have the same problems with it. Could this be a problem with the HDD or SSD? An outdated driver? A faulty SATA cable? I'm so pissed; this stupid process has already taken 2 days. Any help is welcome, thank you.

5508
 
 
The original post: /r/datahoarder by /u/Fancy_mantis_4371 on 2024-12-08 19:02:16.
5509
 
 
The original post: /r/datahoarder by /u/UEG-Starhunter on 2024-12-08 15:15:34.

Bricked my ADATA SSD. What should I get to replace it?

5510
 
 
The original post: /r/datahoarder by /u/iolitm on 2024-12-08 07:37:36.

I have Google Drive, pCloud, and Dropbox. None take this long. I searched the history on Reddit and it seems to be a pattern: IceDrive is slow. We should stop recommending it if they are this slow.

No VPN in use. I have high-speed internet with the fastest upload speed available in my city in the US.

I have the basic 1TB plan. Yearly.

And hopefully nobody is buying these companies' lifetime plans unless they have a rock-solid reputation.

Disappointed with my new IceDrive experience.

https://preview.redd.it/rkvnrsm0wk5e1.png?width=713&format=png&auto=webp&s=ce3c78ee69655aa07c2809b14351fbaba9ecc7f0

5511
 
 
The original post: /r/datahoarder by /u/Wild_Warning3716 on 2024-12-08 19:46:02.

My network is 2.5GbE, so I'm looking for a device that will support that. I would like to have ZFS with parity for one drive failure, or RAID 5. In terms of usage and disk space I am looking to get 3-4TB total, so not a ton. It would be used primarily for documents and photos.

Option 1 I am looking at is a pocket NAS: a 4x NVMe setup costing $150-200 plus drives.

https://liliputing.com/cwwk-x86-p5-pocket-nas-is-a-cheap-tiny-mini-pc-with-dual-2-5-gbe-lan-ports-and-up-four-m-2-2280-slots/

Option 2 is an Asustor 4-bay, costing $250-400 plus drives.

https://www.asustor.com/product/spec?p_id=62

The Asustor seems like the easier setup, but if I wanted to do ZFS I would be looking at installing an alternative OS, and that may be harder than doing so on a bare-bones system.

With the pocket NAS I would need a small boot disk that fits in it, and it would be more self-managed, but it should meet my needs.

Anyone have experience with these two units?
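One sizing note that applies to either box: with single-parity ZFS (raidz1) or RAID 5, usable space is roughly the total of the drives minus one drive, so it's worth checking candidate drive sizes against the 3-4 TB target. A throwaway sketch; the drive sizes below are made-up examples, not recommendations.

def usable_tb(drive_tb: float, count: int) -> float:
    """Rough usable capacity of a single-parity array (raidz1 / RAID 5):
    the capacity of (count - 1) drives, ignoring filesystem overhead."""
    return drive_tb * (count - 1)

# Hypothetical configurations for a 4-slot NVMe box or a 4-bay SATA unit.
for drive_tb, count in [(1, 4), (2, 4), (4, 2)]:
    print(f"{count} x {drive_tb} TB -> ~{usable_tb(drive_tb, count)} TB usable")

On those numbers, four 1 TB sticks in raidz1 or a simple two-drive mirror would already cover a 3-4 TB target.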

5512
 
 
The original post: /r/datahoarder by /u/xxITERUxx on 2024-12-08 19:38:35.

So I have this old HDD that recently got a hit on Reallocated Sector Count. I was starting to back up the data to a new drive when it started causing my PC to freeze. The only way to fix it is to unplug the old drive. I tried Disk Drill to recover the data, but even that freezes whenever I plug in the old drive. Is there any other way I can still recover the data?

5513
 
 
The original post: /r/datahoarder by /u/AlbertoNobilePh on 2024-12-08 19:07:26.

Hi all! I'm looking for a way to organize and consume my 2 TB collection of e-learning courses.

What I need is free and simple software that can read all the relevant media formats (primarily MP4, PDF, DOCX) and keep track of the watched/played status of videos.

Any ideas? I've tried Jellyfin, but I haven't been able to show videos and PDFs in the same library.

Thanks for your help!
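If nothing off the shelf fits, the tracking half of this is small enough to script: walk the course folders, record every file in a JSON sidecar, and flip a flag when you finish something, while the files themselves open in whatever your OS uses by default. A bare-bones sketch; the course root and status file name are placeholders.

import json
from pathlib import Path

COURSES = Path("~/e-learning").expanduser()     # placeholder course root
STATUS_FILE = COURSES / "watched-status.json"   # placeholder tracking file
EXTS = {".mp4", ".pdf", ".docx"}

def load_status() -> dict:
    return json.loads(STATUS_FILE.read_text()) if STATUS_FILE.exists() else {}

def save_status(status: dict) -> None:
    STATUS_FILE.write_text(json.dumps(status, indent=2, sort_keys=True))

def scan() -> dict:
    """Add any new course files to the status map as unwatched."""
    status = load_status()
    for path in COURSES.rglob("*"):
        if path.suffix.lower() in EXTS:
            status.setdefault(str(path.relative_to(COURSES)), False)
    save_status(status)
    return status

def mark_watched(relative_path: str) -> None:
    status = load_status()
    status[relative_path] = True
    save_status(status)

if __name__ == "__main__":
    status = scan()
    print(f"{sum(status.values())}/{len(status)} items watched")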

5514
 
 
The original post: /r/datahoarder by /u/Repulsive_Market_728 on 2024-12-08 18:42:13.

Not sure if anyone here is interested, but I thought I'd post just in case. Up for auction on a government surplus site is a Qualstar LTO tape backup system; it looks to be somewhere in the RLS-8000 series. I don't have anywhere to put anything like this in my house right now, but someone here might be interested. The auction ends tomorrow (Dec 9th) at around 11 am Eastern time.

This is the closest I found to the images.

Here is a YouTube video.

5515
 
 
The original post: /r/datahoarder by /u/Stinka1134 on 2024-12-08 17:45:55.

I'm just gonna download all my saved stuff from TikTok in case it gets banned lmao. I'm gonna start doing the same for anime and possibly download a whole LAPTOP backup into it. Anything I should save?

5516
 
 
The original post: /r/datahoarder by /u/RacerKaiser on 2024-12-08 16:01:24.

There's so much content out there; gallery-dl, yt-dlp, DiscordChatExporter, and countless other tools have helped me archive more efficiently.

However.

Recently I have been struggling to find a good way to archive a Tumblr blog, since apparently gallery-dl fails on multi-image posts (not just for me; there are mentions of it in the GitHub issues).

I have found that fiddling around with software may have taken longer than just right-clicking and downloading the images on the page.

What is your take on this?

5517
 
 
The original post: /r/datahoarder by /u/ojuditho on 2024-12-08 14:37:08.

About a year and a half ago, I purchased an ORICO 5-bay RAID enclosure (3559RU3), which I filled with 3 Seagate 16TB Exos drives (ST16000NM001G) set up as RAID 5. I just purchased a Seagate IronWolf Pro 20TB (ST20000NT001) that I would like to add to it.

The whole setup is being used both for 30 years' worth of data storage and as a Plex server. I know I should probably have gotten a NAS, but I didn't, so I'm doing my best with what I've got.

I am very worried that I'm going to screw something up and lose 29 TB of data, so before I do anything, I wanted to ask a few questions.

My questions are:

  1. Is it possible to add the drive to the RAID 5 configuration without having to wipe everything out and start over?

1b) And if that is possible, is it a smart idea?

  2. If the answer to either of the above is no, is there anything I need to do to just add the 4th drive to the bay to act as a separate "2nd" drive?

  3. Since this is being used as a Plex server, would it make more sense to transfer all the data from the 3-drive RAID to the new drive, then reformat the RAID drives as individual disks and reclaim the redundant drive for more space? (This option would result in no backup.)

Sorry if these seem like simple questions... I set this up a while ago and I don't remember exactly how or what I did, and don't want to mess anything up. I appreciate any help!
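The capacity arithmetic behind these questions can be worked out up front. A rough sketch, assuming the ORICO box behaves like standard RAID 5 (every member counts as the size of the smallest drive, and one drive's worth of space goes to parity); I don't know this enclosure's firmware, so treat it as illustration only.

def raid5_usable_tb(drives_tb: list[float]) -> float:
    """Standard RAID 5: every member is limited to the smallest drive,
    and one drive's worth of capacity is used for parity."""
    return min(drives_tb) * (len(drives_tb) - 1)

current = [16, 16, 16]
expanded = current + [20]   # the new IronWolf Pro, if expansion is supported at all
data_tb = 29                # data currently on the array, per the post

print(f"current 3x16TB RAID 5: ~{raid5_usable_tb(current)} TB usable")
print(f"after adding the 20TB: ~{raid5_usable_tb(expanded)} TB usable "
      f"(only 16 TB of the new drive would be used)")
print(f"fits on the bare 20TB drive alone? {data_tb <= 20}")

On those numbers, the 29 TB already on the array would not fit on the 20 TB drive by itself, which matters for question 3.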

5518
 
 
The original post: /r/datahoarder by /u/dakazze on 2024-12-08 14:09:50.

Funnily enough, I just recently built a new home server and NAS to get all of my data out of the cloud (purely for cost and fun, no ideological issues) and quit all of my cloud subscriptions. Now all of my data (roughly 8TB) is stored locally (ZFS + offsite backup), and I have an automated sync of my most important work data to gdrive, which is only about 20 GB.

Now with my new phone came a free upgrade to 2 TB for one year and as a passionate hoarder I am having a hard time with the thought of leaving all of this space unused.

Got any ideas?

5519
 
 
The original post: /r/datahoarder by /u/Hypochondirac101 on 2024-12-07 13:45:25.

Growing up, we had several home videos stored on VHS, which we then transferred to DVD and are now saved on the cloud.

I want to start home videos for my family. I know it’s silly, but I don’t want to use my iPhone. I want to use some sort of camcorder. Most camcorders use SD cards to store the videos.

My question is this: after the videos are stored on the SD card, what is the best way to save these files long term so that my kids can easily watch them when they are older? Do I transfer them from the SD card to either (or both) an external hard drive or some sort of cloud storage? Do I need a computer for that, or could I somehow transfer directly from the SD card to wherever they need to be stored long term? If that makes sense. Sorry for the rambling. Or should I just keep them on the SD cards, keep using the camcorder, and hook the camcorder up to the TV whenever my kids want to watch them?

If they are stored on the cloud or external hard drive, how can we watch them on the TV when we want to?

Thanks in advance!
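For the "transfer from the SD card" step, the usual path is to copy to a computer first and then onward to the external drive and/or cloud. If you want a little extra safety before wiping the card, a small script can copy each clip and confirm the copy is byte-identical. A sketch that assumes the card mounts as a normal drive; both paths and the file extension are placeholders to adjust for your camcorder.

import hashlib
import shutil
from pathlib import Path

SD_CARD = Path("/Volumes/CAMCORDER/DCIM")      # placeholder mount point for the card
ARCHIVE = Path("/Volumes/FamilyVideos/2024")   # placeholder destination folder

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

ARCHIVE.mkdir(parents=True, exist_ok=True)
for clip in sorted(SD_CARD.rglob("*.mp4")):    # adjust the extension to your camcorder's format
    dest = ARCHIVE / clip.name
    shutil.copy2(clip, dest)                   # copy with timestamps preserved
    ok = sha256(clip) == sha256(dest)
    print(f"{clip.name}: {'verified' if ok else 'MISMATCH - do not delete the original'}")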

5520
 
 
The original post: /r/datahoarder by /u/Confused_n_tired on 2024-12-07 13:29:10.

As a noob, I didn't follow 3-2-1 rule and lost all my data saved on my external HDD.

Most of the data I have is photos and videos.

As a total noob, which option should I go for?

External HDD/SSD: get 2 HDDs and hope for the best! I have to travel with it, so I was thinking of switching one of them to an SSD.

Pro: cheap, simple. Con: can give up on me at any moment, just like my ex (hard drive).

Off-the-shelf NAS: a 2-bay Synology is enough for me. (Please let me know if there are equally good alternatives to this brand, because they aren't cheap.)

Pro: personal cloud, great against disk failure. Con: cost.

DIY Pi NAS: seems like a great, cheap way to set up a NAS. I'm a fish out of water when it comes to DIY computers, but you gotta start somewhere!

Pro: cheapest cloud, can be personalised. Con: I will have no clue about anything and everything.

TL;DR: external HDD or NAS (off-the-shelf or DIY) for a complete noob? Which is the better option?

5521
 
 
The original post: /r/datahoarder by /u/Cmdr_Nemo on 2024-12-07 12:56:00.

I've been using this SSD for a while. It sees fairly constant write/re-write activity. I back it up semi-regularly.

Does the "corruption" also copy over to the backup drive to the point where it may mess up the backup drive as well?

I've noticed some issues with the primary drive that I can't seem to fix:

  • "Phantom" folders of existing folders are showing up in the list of folders on the drive. They all contain the same files but are inaccessible and I am unable to delete/move them.
  • Some files are not getting deleted properly. I use "everything.exe" to search files and delete what I need to. For whatever reason, some files, when I hit delete, do not delete. I am still able to open them. Then when I try to open them in their folder, I am unable to find the file.

Sorry if the question is dumb... like I would think the answer would be no but I just want to be sure.

5522
 
 
The original post: /r/datahoarder by /u/DominusTheMerciful on 2024-12-07 12:28:48.

Edit: I've been corrected that it is an HDD. However, I cannot change the title.

I am experiencing significant issues with my external HDD. Below is a detailed account of the problems I have encountered:

Background

• I have used this external HDD to back up various files and, on some occasions, the entire C:/ drive of my computer before formatting it.

• Recently, as the HDD’s storage capacity became nearly full, I decided to delete some files to free up space.

• While deleting files, I attempted to remove the “Program Files” directory from a previous C:/ backup. The system warned me that removing it might cause issues, but I proceeded, assuming it wouldn’t affect an external drive.

Current Problems

1.  Access Issues:
• When I connect the HDD to my computer, it is recognized as drive E:, but the moment I attempt to access it, File Explorer freezes.

• The system becomes unresponsive and doesn’t allow me to eject the HDD safely.

• I cannot shut down my computer unless I physically unplug the HDD.

2.  Corrupted Data:
• After deleting the “Program Files” folder, other files and directories on the HDD became inaccessible.

• The HDD seems to have suffered some kind of corruption as a result of the deletion.

Troubleshooting Attempts

• I tried to access the drive using standard methods, but the computer either froze or couldn’t process the request.

• Safe ejection is not possible, as the system treats the drive as perpetually in use.

• I suspect the issue might be related to:

• File system corruption (possibly NTFS metadata damage).

• Overfilled storage leading to write/read errors.

• Hardware failure such as bad blocks or controller issues.

• Problems caused by deleting critical system-level folders like “Program Files.”

Assistance Needed

I am looking for recommendations on:

1.  How to safely access and recover data from the HDD without causing further corruption.
2.  Potential diagnostics to determine if the issue is hardware-related or tied to the file system.
3.  The best tools or professional services for recovering data in this scenario.

Thank you for your help!

5523
 
 
The original post: /r/datahoarder by /u/gargravarr2112 on 2024-12-07 12:24:10.

Hey folks. I've got a pretty big tape setup with 2 autoloaders and a couple of standalone drives. I'm using Bacula primarily; I have a 17TB backup of my Plex/TA library pending. However, Bacula keeps deciding that the tape is full long, long before then - sometimes after a few hundred MB written. So I've been working through the tapes I used for that backup and running an mt erase on them to try to get Bacula to realise these tapes have 2.5TB of usable space.

However, while most of them take hours to erase (indicating a full pass of the whole tape), a few are done in minutes or even seconds, even with a second pass. I suspect there's some correlation between this and Bacula not using the whole tape. Does anyone know what this means and if the tapes are actually usable? Or does this indicate damaged tapes?

5524
 
 
The original post: /r/datahoarder by /u/EspExpertt on 2024-12-07 10:48:06.

A few videos have styled subtitles like in these examples, with their movement, colors, etc. Is there an option for yt-dlp or some other software that keeps them that way? (Turn on subtitles.) A few examples: https://www.youtube.com/watch?v=5EKquLnbo0k , https://www.youtube.com/watch?v=gNn9NxZH2Vo Thanks.
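One hedged approach: YouTube serves captions in richer formats than plain SRT (srv3/json3 carry timing and positioning data), and yt-dlp can be asked to save the subtitle track as a separate file instead of converting it. A minimal sketch with the Python API; which formats a given video offers varies, and whether a player actually renders the colors and movement depends on the format it receives.

from yt_dlp import YoutubeDL

urls = [
    "https://www.youtube.com/watch?v=5EKquLnbo0k",
    "https://www.youtube.com/watch?v=gNn9NxZH2Vo",
]

opts = {
    "writesubtitles": True,              # save the uploader's subtitle track alongside the video
    "subtitleslangs": ["en"],            # adjust to the languages you need
    "subtitlesformat": "srv3/vtt/best",  # prefer the richer YouTube formats when available
    "outtmpl": "%(title)s [%(id)s].%(ext)s",
}

with YoutubeDL(opts) as ydl:
    ydl.download(urls)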

5525
 
 
The original post: /r/datahoarder by /u/FabadaLosDomingos on 2024-12-06 19:30:22.

I just bought a 16TB HDD from Seagate (from Amazon). The drive is brand new; I had a problem mounting it, and when I ran a health check I realised I have zero idea what any of it means. Does this mean that I formatted my drive incorrectly (ext4), or maybe that I should overwrite the whole drive?

server@linux:~$ sudo smartctl -a /dev/sda
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.10.11-200.fc40.x86_64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     ST16000NM000J-2TW103
Serial Number:    ZRS1FQXE
LU WWN Device Id: 5 000c50 0e899264f
Firmware Version: SN02
User Capacity:    16,000,900,661,248 bytes [16.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database 7.3/5528
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Dec  6 20:25:51 2024 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)The previous self-test routine completed
without error or no self-test has ever 
been run.
Total time to complete Offline 
data collection: (  567) seconds.
Offline data collection
capabilities:  (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:            (0x0003)Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:        (0x01)Error logging supported.
General Purpose Logging supported.
Short self-test routine 
recommended polling time:  (   1) minutes.
Extended self-test routine
recommended polling time:  (1424) minutes.
Conveyance self-test routine
recommended polling time:  (   2) minutes.
SCT capabilities:        (0x70bd)SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   050   050   044    Pre-fail  Always       -       36361577
  3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       2
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       28
  7 Seek_Error_Rate         0x000f   100   253   045    Pre-fail  Always       -       46649
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       1
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       2
 18 Unknown_Attribute       0x000b   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   001   001   000    Old_age   Always       -       108
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   069   066   000    Old_age   Always       -       31 (Min/Max 27/34)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       2
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       8
194 Temperature_Celsius     0x0022   031   040   000    Old_age   Always       -       31 (0 23 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       98
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       98
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       0 (65 187 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       36146800
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       214777

SMART Error Log Version: 1
ATA Error Count: 195 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 195 occurred at disk power-on lifetime: 1 hours (0 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 08 c9 00 00  Error: UNC at LBA = 0x0000c908 = 51464

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 01 08 c9 00 40 00      00:51:21.067  READ DMA EXT
  25 00 40 00 c9 00 40 00      00:51:16.658  READ DMA EXT
  25 00 01 08 c9 00 40 00      00:51:13.611  READ DMA EXT
  25 00 40 00 c9 00 40 00      00:51:10.587  READ DMA EXT
  25 00 40 00 49 00 40 00      00:51:09.722  READ DMA EXT

Error 194 occurred at disk power-on lifetime: 1 hours (0 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 08 c9 00 00  Error: UNC at LBA = 0x0000c908 = 51464

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 40 00 c9 00 40 00      00:51:16.658  READ DMA EXT
  25 00 01 08 c9 00 40 00      00:51:13.611  READ DMA EXT
  25 00 40 00 c9 00 40 00      00:51:10.587  READ DMA EXT
  25 00 40 00 49 00 40 00      00:51:09.722  READ DMA EXT
  25 00 40 40 08 00 40 00      00:51:08.859  READ DMA EXT

Error 193 occurred at disk power-on lifetime: 1 hours (0 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 08 c9 00 00  Error: UNC at LBA = 0x0000c908 = 51464

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 01 08 c9 00 40 00      00:51:13.611  READ DMA EXT
  25 00 40 00 c9 00 40 00      00:51:10.587  READ DMA EXT
  25 00 40 00 49 00 40 00      00:51:09.722  READ DMA EXT
  25 00 40 40 08 00 40 00      00:51:08.859  READ DMA EXT
  25 00 40 00 08 00 40 00      00:51:08.212  READ DMA EXT

Error 192 occurred at disk power-on lifetime: 1 hours (0 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 08 c9 00 00  Error: UNC at LBA = 0x0000c908 = 51464

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 40 00 c9 00 40 00      00:51:10.587  READ DMA EXT
  25 00 40 00 49 00 40 00      00:51:09.722  READ DMA EXT
  25 00 40 40 08 00 40 00      00:51:08.859  READ DMA EXT
  25 00 40 00 08 00 40 00      00:51:08.212  READ DMA EXT
  25 00 40 80 00 00 40 00      00:51:08.210  READ DMA EXT

Error 191 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 68 14 00 00  Error: UNC at LBA = 0x00001468 = 5224

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 68 14 00 40 00      00:35:01.663  READ FPDMA QUEUED
  60 00 08 60 14 00 40 00      00:35:01.645  READ FPDMA QUEUED
  60 00 08 58 14 00 40 00      00:35:01.423  READ FPDMA QUEUED
  60 00 08 50 14 00 40 00      00:35:01.369  READ FPDMA QUEUED
  60 00 08 40 14 00 40 00      00:35:01.166  READ FPDMA QUEUED

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 S...
***
Content cut off. Read original on https://old.reddit.com/r/DataHoarder/comments/1h89kfn/issue_with_new_bought_hdd_16tb_seagate/