Posts by garlicknots

    Got it, thanks.

    Hey marcelvanherk I've got a problem for ya.

    We had an exam arrive today with 1,024 characters in ReasonForStudy (0032,1030), which choked our Exports. We've been slowly building a DICOM-normalizer Lua script that we run against all incoming images to clean up value-length (VL) mismatches that our long-term archive does not like, and ReasonForStudy is one of the tags we fire against.

    I spent a little while tonight trying to figure out if we had an issue in our Lua, and after some trial and error I've found that if a DICOM tag's value is longer than 255 characters, the comparison will not fire. What can be done to extend that character-length limitation?

    Here's a snippet of the Lua:

    for i, TAG in pairs(VL64) do
        if Data[TAG] and Data[TAG]:len() > 64 then
            Data[TAG] = string.sub(Data[TAG], 1, 64)
            print('ic-dicomnormalizer.lua UPACS THREAD ' .. Association.Thread ..
                  ': has truncated: ' .. Data[TAG] .. ' to 64 characters')
        end
    end

    Definitely can't go without retries. We've given that an attempt in the past, but there are too many erroneous failures to rely on it. The retry firing on the entire converter is helpful for a few reasons: we have the converters notifying an InfluxDB database of attempts so we can monitor through Grafana, and seeing in Grafana when something is retrying repeatedly is one of our countermeasures for managing 'clogs.'

    Moving to 1.5.0 is something we could consider. We've not moved to 1.4.19d yet so an upgrade is somewhat due. When do you foresee 1.5.0 releasing?

    We are using CQ for many workflows and enforce retries when transfers fail. The behavior we see is that a failed send will retry indefinitely until it succeeds and will not allow any other objects to send through the EC until the failure is handled as the EC expects.

    Because of this, we are separating workflows into different ECs so that they do not impact one another when there is a transfer problem. We're getting closer and closer to the documented limit of 20 ECs. Can this limit be raised, and/or is there a way to revise the retry behavior so the ECs do not completely halt when there are transfer problems?
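For anyone following along: splitting workflows across ECs is a per-entry change in dicom.ini. A minimal sketch of what we mean (the AE titles PACS1 and ARCHIVE1 are placeholders, not our actual destinations):

```ini
# dicom.ini (sketch): each ExportConverterN is its own queue, so a stuck
# transfer on one destination does not block the other
ExportConverters = 2
ExportConverter0 = forward to PACS1
ExportConverter1 = forward to ARCHIVE1
```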


    My site is looking to try out DICOM TLS but we don't seem to have a software solution which supports it natively. marcelvanherk is this feature possibly coming in a future release?

    Is there another way I could make this function without native support within dgate?
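One common non-native workaround is to wrap dgate's DICOM port with a TLS proxy such as stunnel, terminating TLS outside dgate on the server side and wrapping outbound connections on the client side. A sketch, assuming dgate listens on 5678 and using the registered DICOM-TLS port 2762 (ports, paths, and the peer host are all assumptions):

```ini
; /etc/stunnel/dicom-tls.conf (sketch)
; Server side: terminate TLS, hand plaintext DICOM to the local dgate port
[dicom-tls-in]
accept  = 2762
connect = 127.0.0.1:5678
cert    = /etc/stunnel/dicom-server.pem

; Client side: dgate sends plaintext to 5679, stunnel wraps it in TLS
[dicom-tls-out]
client  = yes
accept  = 127.0.0.1:5679
connect = remote-pacs.example.org:2762
```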

    The memory increase has had no effect. We observed this issue again today.

    What's so curious is that this affects individual ExportConverters. We have more than one ExportConverter sending to the same destination, and one will fire successfully while the other does not. Stop/start dgate and everything is fine again.

    Hi frank, we've got a CentOS 7 cluster running CQ. We did not compile ourselves, but we do have a working (though rarely used) web portal and can probably help. Our webserver setup is a bit hacky from my point of view, but then again that could be on me more than anything else.

    Presumably you have dgate and the supporting webserver config in your cgi-bin; is that true?

    File permissions accurate?

    File ownership accurate?

    Newweb or classicweb?

    This occurred again today on one node in the cluster. This time, it didn't appear to be driven by load as we have observed in the past.

    The red line in the top-left of the graph marks the rough time window in which vlconquest01 was misbehaving (failing to send all studies that were stored). We stopped and started dgate and the behavior returned to normal.

    Hi Marcel - do you have a preference for how requests come to you? I can name a few right now, but I don't want to dump them in the wrong location. One (seemingly) small enhancement that would be nice for us would be if timestamps in logfiles included milliseconds.

    We have been holding steady (4 full days) since adding the additional vCPU to the nodes in the cluster (from 2 to 3 vCPUs on 4 nodes). We want to avoid additional change so we can be more confident about the root cause.

    As of a few weeks ago, our Import and Export converters call scripting that passes data to an InfluxDB instance using curl so we can visualize stats through Grafana. It's looking like the additional overhead from curl/HTTP was somehow causing this behavior.
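As an illustration of the kind of hand-off involved (the measurement name, tag names, and InfluxDB URL below are hypothetical, not our actual setup), a converter can build an InfluxDB 1.x line-protocol string and ship it with a backgrounded curl so a slow HTTP call cannot stall the converter itself:

```lua
-- Build an InfluxDB 1.x line-protocol string: measurement,tag=v,... value=N
-- Tag keys are sorted so the output is deterministic.
local function influx_line(measurement, tags, value)
  local keys = {}
  for k in pairs(tags) do keys[#keys + 1] = k end
  table.sort(keys)
  local parts = {}
  for _, k in ipairs(keys) do
    parts[#parts + 1] = k .. '=' .. tags[k]
  end
  return measurement .. ',' .. table.concat(parts, ',') .. ' value=' .. value
end

-- Fire-and-forget POST; the trailing '&' backgrounds curl so the converter
-- is never blocked by HTTP latency (host and db names are assumptions).
local function report_attempt(converter, destination)
  local line = influx_line('ec_attempts',
    { converter = converter, destination = destination }, 1)
  os.execute("curl -s -XPOST 'http://influx:8086/write?db=conquest'"
    .. " --data-binary '" .. line .. "' &")
end
```

Backgrounding the curl call is the important design choice here: if the metrics endpoint is slow or down, the converter keeps moving images and only the stats are lost.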

    We upgraded yesterday actually. Unfortunately that did not resolve the issue.

    /edit: we have also now added an additional vCPU to the guest in case this was somehow related to processing power.

    /edit2: Marcel - I am reading in other threads about how you recommend setting up a forwarder and am unclear on how best to do so. We have settings similar to what you have outlined here Exporter Failures, but we sometimes see images missing from series that then need to be redelivered. Speed is a big factor in what we do, so adding in a delay sounds a little scary... but if we can't rely on the ECs to send everything without doing so, that would be good to know. For a group using ConQuest as a router, how would you recommend we develop our import/export converters to assure 100% store accuracy?

    /edit3: we are running on Linux, just as an FYI


    We've observed behavior like this when certain abstract syntaxes were selected, and we have disabled them. You should look at the syntax being negotiated in 1.4.17 and modify your 1.4.19 dgatesop.lst to encourage that preferred method.
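To make that concrete, steering negotiation means editing the transfer-syntax entries in dgatesop.lst. A sketch of the idea (the exact column layout varies between releases, so check the file shipped with your version); commenting an entry out with '#' removes it from what dgate will offer or accept:

```ini
# dgatesop.lst (sketch): keep the syntaxes that worked under 1.4.17,
# comment out the ones you want dgate to stop negotiating
LittleEndianImplicit   1.2.840.10008.1.2     transfer
LittleEndianExplicit   1.2.840.10008.1.2.1   transfer
#BigEndianExplicit     1.2.840.10008.1.2.2   transfer
```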