Posts by garlicknots

    This occurred today. I was able to use the webserver to move the object to the destination successfully. It's got something to do with the object in memory in the ExportConverter.


I wonder if this could be resolved by adding a time buffer on the ImportConverter so that cine clips aren't released to ExportConverters for X number of seconds.

We have 4 nodes in a balanced cluster, each with its own DB. Would the changeUID function result in differing UIDs on each environment if repeated sends were balanced to other nodes?


    We haven't really considered using shared storage or a shared db for these systems, but doing so could possibly help with tasks like this.
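For comparison's sake, a mapping that stays consistent across nodes without a shared DB would have to be derived from the original UID itself rather than from a per-node table. A rough Lua sketch of that idea (placeholder UID root, simplistic digest, not collision-proof):

    Code
    -- Rough illustration: derive a repeatable SeriesInstanceUID from the original,
    -- so every node computes the same value without sharing a database.
    local UID_ROOT = '1.2.826.0.1.3680043.9999'  -- placeholder; use your own registered root

    local function digest(s)
      local h = 0
      for i = 1, #s do
        h = (h * 31 + s:byte(i)) % 1e12  -- simplistic fold, not collision-proof
      end
      return string.format('%.0f', h)
    end

    local function derivedSeriesUID(originalUID)
      return UID_ROOT .. '.' .. digest(originalUID)
    end

    print(derivedSeriesUID('1.2.840.113619.2.55.3.1234'))  -- same output on every node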

    Hi Marcel


    We have this method in place because we wanted to ensure the UID generation was unique per exam and not per send. With newuids, someone could send the same dataset repeatedly and duplicate it, couldn't they?


When you mention an inconsistent dataset, where would the inconsistency be? The filesystem / the database?


    The edit occurs on a single object, creating a new series. I am not clear on how an inconsistency with the UIDs can exist. This runs on an ImportConverter, so it should all be pre-filesystem & db, right?

Forgot to post the resolution for this. It was resolved by removing a trailing & from the stats script.


    export reporter:

    curl -i -XPOST 'http://0.0.0.0:8086/write?db=conquest_stats' --data-binary "conquest_exports,hostname=`hostname -s`,destinationae=$1,modality=$2,calledae=$3,state=$4 callingae=\"$5\",sopuid=\"$6\",mrn=\"$7\",accession=\"$8\""


    import reporter:
    curl -i -XPOST 'http://0.0.0.0:8086/write?db=conquest_stats' --data-binary "conquest_imports,hostname=`hostname -s`,modality=$1,calledae=$2 callingae=\"$3\",sopuid=\"$4\",mrn=\"$5\",accession=\"$6\""
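
    For reference, the converters call these with positional arguments, e.g. (script name and values are just illustrative):

    ./export_reporter.sh DEST_AE MR CONQUESTSRV1 success CALLING_AE 1.2.840.113619.2.55.3.1234 123456 A7890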

    We have an issue with our diagnostic viewer which sometimes causes cine clips to blend into the stack of stills, when we'd like to have them easily identifiable as cine clips.


    To split these clips, we built a lua which generates a new (and consistent) series instance UID and changes the series description to CINE. This fires properly, but at times it writes an object which will not successfully export. The failure is not consistent: I do not know how to reproduce it, and I do not have access to a study which will cause the issue. We see it several times a month.


    I noticed yesterday that when this file is written, it's a DCM file instead of a v2.

    We use FileNameSyntax=3 so I'd expect to see a v2 object.


    Here is the lua:
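
    In simplified form it amounts to the following (the NumberOfFrames test and the script name are illustrative; changeuid() is the helper discussed above, signature per your dgate build):

    Code
    -- Simplified sketch of the cine-splitting ImportConverter.
    -- Multiframe objects are treated as cine clips and moved to their own series.
    local frames = tonumber(Data.NumberOfFrames or '') or 0
    if frames > 1 then
      -- map the original SeriesInstanceUID to a new, repeatable value
      Data.SeriesInstanceUID = changeuid(Data.SeriesInstanceUID)
      Data.SeriesDescription = 'CINE'
      print('ic-cinesplit.lua UPACS THREAD ' .. Association.Thread .. ': moved ' .. Data.SOPInstanceUID .. ' to CINE series')
    end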


    Got it, thanks.


    Hey marcelvanherk I've got a problem for ya.


    We had an exam arrive today with 1024 characters in ReasonForStudy (0032,1030), which choked our Exports. We've been slowly building a DICOM normalizer lua that we run against all incoming images to clean up value-length mismatches that our long-term archive does not like, and ReasonForStudy is one of the tags we fire against.


    I spent a little while tonight trying to figure out if we had an issue in our lua, and after some trial and error I've found that if a DICOM tag value is longer than 255 characters, the comparison will not fire. What can be done to extend that character length limitation?


    Here's a snip of the lua


    Code
    for i, TAG in pairs(VL64) do
      if Data[TAG] and Data[TAG]:len() > 64 then
        Data[TAG] = string.sub(Data[TAG], 1, 64)
        print('ic-dicomnormalizer.lua UPACS THREAD ' .. Association.Thread .. ': has truncated: ' .. Data[TAG] .. ' to 64 characters')
      end
    end
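
    For context, VL64 is just our table of tag names capped at 64 characters (LO value representation); its contents are along these lines (partial, illustrative):

    Code
    -- tags whose values get capped at 64 characters (LO VR); partial list
    local VL64 = {
      'ReasonForStudy',
      'StudyDescription',
      'SeriesDescription',
      'InstitutionName',
      'Manufacturer',
    }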

    Definitely can't go without retries. We've tried that in the past, but there are too many erroneous failures to rely on it. The retry firing on the entire converter is helpful for a few reasons: we have the converters notifying an InfluxDB database of attempts so we can monitor through Grafana, and seeing in Grafana when something is retrying repeatedly is one of our countermeasures for managing 'clogs.'


    Moving to 1.5.0 is something we could consider. We've not moved to 1.4.19d yet, so an upgrade is somewhat due. When do you foresee 1.5.0 being released?

    We are using CQ for many workflows and enforce send retries when transfers fail. The behavior we see is that a failed send will retry indefinitely until it succeeds, and will not allow any other objects to go out through the EC until it is handled as the EC expects.


    Due to this, we are separating workflows into different ECs so that they do not impact one another when there is a transfer problem. We're getting closer and closer to the documented limit of 20 ECs. Can this be extended, and/or is there a way to review the retry behavior so the ECs do not completely halt when there are transfer problems?
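
    For reference, the separation itself is just parallel converters in dicom.ini, roughly like this (AE titles are placeholders):

    Code
    ExportConverters = 3
    ExportConverter0 = forward to PACS_MAIN
    ExportConverter1 = forward to LT_ARCHIVE
    ExportConverter2 = forward to RESEARCH_AE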

    Hello,


    My site is looking to try out DICOM TLS, but we don't seem to have a software solution that supports it natively. marcelvanherk, is this feature possibly coming in a future release?


    Is there another way I could make this work without native support within dgate?
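
    One non-native route I can think of would be terminating TLS in front of dgate with a generic wrapper such as stunnel, leaving dgate itself on plain DICOM locally; a minimal sketch (ports and paths are placeholders, 5678 being the default dgate port):

    Code
    [dicom-tls]
    accept  = 2762
    connect = 127.0.0.1:5678
    cert    = /etc/stunnel/dicom-server.pem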

    The memory increase has had no effect. We observed this issue again today.


    What's so curious is that this affects the ExportConverters themselves. We have more than one ExportConverter sending to the same destination, and one will fire successfully while the other does not. Stop/start dgate and everything is fine again.