Posts by qcor

    Unfortunately I can't see any difference using 1.5.0 (unless I should do something more than the usual .exe/.dll replacements).


    you could also instruct conquest to send data in parallel, using multiple moves at once.

    This sounds interesting... Could you please elaborate a bit on this? Would it be possible to respond to a client's query with parallel moves? Like, for example, sending each series simultaneously?

    I'm not sure how that would work... can this be done using Lua? If possible, this would speed up sending considerably. (Not so long ago I was experimenting with this a bit and found that the speed gain from using 4 parallel 'lines' was close to 4x. I expected to hit some diminishing returns at that point, but no - the gain was almost linear.)

    Thx for quick answer

    Pretty much all the clients report a 16k PDU size, except for Efilm, which uses a 64k PDU.

    Is it possible to force larger blocks? Or is this one of those things which has to be negotiated and accepted by both sides (server and client)?

    This is becoming a real issue because with each passing year more and more doctors prefer to work from remote locations.

    Right now it's sometimes (if a client has about 50 ms latency) faster to zip the whole study, download it via a shared folder and then unpack it than to send it via the DICOM protocol. :(
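    To put rough numbers on that, here's a crude back-of-envelope model (a sketch, not DICOM-specific: `round_trips_per_image` is an assumption - the real number depends on PDU size and how the SCP acknowledges each C-STORE, and all the example figures are made up for illustration):

```python
def transfer_time_s(n_images, image_kb, bandwidth_kbs, rtt_ms,
                    round_trips_per_image=1, streams=1):
    """Crude model: per-image latency cost (divided across parallel
    streams) plus the time the bytes spend on the shared wire."""
    latency_s = n_images * round_trips_per_image * (rtt_ms / 1000.0) / streams
    wire_s = n_images * image_kb / bandwidth_kbs
    return latency_s + wire_s

# Hypothetical study: 500 images of ~300 KB over a 5 MB/s link, 50 ms RTT.
print(transfer_time_s(500, 300, 5000, 50, streams=1))  # 55.0 s serial
print(transfer_time_s(500, 300, 5000, 50, streams=4))  # 36.25 s with 4 streams
```

    In this toy model the wire time is fixed, but the latency term shrinks with the number of parallel streams - which would be consistent with the near-linear gain from parallel 'lines' mentioned above, as long as latency (not bandwidth) dominates.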

    I'll try 1.5.0 and compare the results.


    Hi everyone

    We have a doctor in a remote location connected via VPN. He has a decent connection - speed tests show around 40/8 Mbit with ping oscillating in the 15-25 ms range (stable, no packet drops).

    The problem is the speed - it's nowhere near the network cap. From what I can see it's about 2-3 images per second (700-800kb/s)

    At the same time I can download the same study at a rate of 20+ images per second (5-6Mb/s).

    Same VPN, same client (Radiant), same server (1.4.19d1). The only real difference is that I'm much closer, so I have a much better ping to the server (100/100 Mbit with a 2 ms ping).

    The interesting thing is that he can do a little trick using Horus/OsiriX - they have an option to download a single series - so basically he can order the download of each series separately >at the same time<, making the whole process parallel and thus MUCH faster.

    So clearly it IS possible to download the study faster (I can do it, and he can do it using this trick), so the network bandwidth is not the problem. Nor the server, nor the client..

    The only thing left is latency and network overhead.

    Now the $100 question - is there a way to speed up sending? Some setting, maybe? Some way to make it parallel? Is the DICOM standard really this sensitive to latency, or am I missing something here?
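    The Horus/OsiriX trick above could in principle be scripted by hand: issue one retrieve per series concurrently instead of one retrieve for the whole study. A minimal sketch - `retrieve_series` here is a placeholder, not a real API; in practice it would shell out to a per-series C-MOVE/C-GET (e.g. a movescu call keyed on SeriesInstanceUID):

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve_series(series_uid: str) -> str:
    # Placeholder: replace with the real per-series C-MOVE/C-GET call.
    return f"done {series_uid}"

def retrieve_study_parallel(series_uids, max_workers=4):
    """Run one retrieve per series, a few at a time, in parallel."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map keeps results in input order.
        return list(pool.map(retrieve_series, series_uids))

print(retrieve_study_parallel(["1.2.3.1", "1.2.3.2", "1.2.3.3"]))
```

    The series UIDs themselves would first have to come from a series-level C-FIND against the study.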

    Hi all

    Lately I had to add 2 new exporters, and since then strange things have started to happen.
    Until now I was using this set of 7 ExportConverters:

    ExportConverters = 7
    ExportModality0 = CT
    ExportConverter0 = xx.exe
    ExportModality1 = MR
    ExportConverter1 = xx.exe
    ExportModality2 = CR
    ExportConverter2 = xx.exe
    ExportModality3 = US
    ExportConverter3 = xx.exe
    ExportModality4 = MG
    ExportConverter4 = xx.exe; forward compressed as j2 to AIO;
    ExportConverter5 = ifequal "%V0032,1033", "CZM"; forward compressed as j2 to FWD;
    ExportModality6 = PX
    ExportConverter6 = xx.exe

    and everything was working as expected... for years. But as soon as I changed ExportConverters to 9 and added this:

    ExportModality7 = CT
    ExportCallingAE7 = AN_CT
    ExportConverter7 = forward compressed as un to Op1
    ExportModality8 = CR
    ExportCallingAE8 = NXCRM
    ExportConverter8 = forward compressed as un to Op1

    strange hang-ups started to happen. It works for some time and then suddenly I get a call from the CT technicians telling me that they can't send images to the PACS... so I restart the service and it works again for a few hours... then again it suddenly stops accepting images.

    Am I missing some obvious mistake in the config here, or..?
    dgate ver 1.4.17d


    I'm not sure I understand. I just tested this on both viewers and the outcome is still the same.

    Query from Radiant:
    1) smith generates -> LIKE E'%smith%'
    2) *smith* generates the same -> LIKE E'%smith%'

    Query from Efilm:
    1) smith generates -> LIKE E'SMITH%'
    2) *smith* generates -> LIKE E'%SMITH%%'

    In both cases the resulting DB queries are case sensitive, so I'm not sure what you meant by this.
    The only way this could work is by using ILIKE instead of LIKE, but ILIKE has its own problems - if I remember correctly it has trouble using index scans, and seq scans on a big DB can be painful :(

    Hello everyone

    I'd like to ask how you deal with case-sensitive name queries from different DICOM clients. For example:

    Patient name in my pg database looks like this: "John Smith"

    Let's say doctor typed in "smith" and hit search button.
    Now, depending on the viewer, there are a few cases:
    1) some DICOM viewers (like Efilm, for example) ask for "SMITH" (to be more precise: ...WHERE DICOMStudies.PatientNam LIKE E'SMITH%'...)
    2) other viewers (like Radiant) leave everything as it was, so they'll ask for "smith"

    so.. they both return 0 results.

    For now I can't even be sure that the name format in my DB is always consistent/correct - 99% of the time the study is made based on a worklist entry, so it'll be like 'John Smith', but sometimes something goes wrong and the study is made manually... so I cannot guarantee that someone didn't type in "John SMITH", for example.
    This can be easily fixed by an ImportConverter using '^'... so let's say I'll use it to guarantee that ALL entries in the DB will be in upper case.

    Case 1 fixed.. but what about case 2?

    Can I force Conquest to always ask the DB for upper('smith')? As in ...WHERE DICOMStudies.PatientNam LIKE upper(E'smith%')...
    Can I modify an incoming C-FIND and replace 'smith' with 'SMITH' on the fly?
    Or maybe there is another easy solution I can't see?
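    The intended behavior can be illustrated with a toy sketch: uppercase BOTH the stored name and the incoming pattern before comparing, so "smith", "SMITH" and "*Smith*" all hit "John Smith". The SQL-side equivalent would be comparing upper(PatientNam) against an upper-cased pattern (ideally with an index on upper(PatientNam) so it stays an index scan - untested assumption here):

```python
def name_matches(db_name: str, query: str) -> bool:
    """Case-blind 'contains' match for a DICOM-style wildcard query.
    Strips the surrounding '*' wildcards and uppercases both sides."""
    needle = query.upper().strip("*")   # '*smith*' -> 'SMITH'
    return needle in db_name.upper()

print(name_matches("John Smith", "smith"))    # True
print(name_matches("John SMITH", "*Smith*"))  # True
print(name_matches("John Smith", "jones"))    # False
```

    This is only an illustration of the normalization idea, not how Conquest builds its queries.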

    Well... yes. They both use the same credentials from the same .ini file.
    Also, test b) wouldn't work then.

    Just to be clear - it happens only when the service is already running and then I try to run the GUI as a different user... as in:

    dgate service already running (launched as USER_A), then USER_B opens the GUI => error when clicking on the 'browse db' tab
    dgate service NOT running, then USER_B opens the GUI => no error

    sooo.. to sum it up, all possible cases:

    only GUI (launched as USER_A) = no error
    only GUI (launched as USER_B) = no error

    service and GUI (both launched by USER_A) = no error
    service and GUI (both launched by USER_B) = no error

    service (by USER_A) and GUI (by USER_B) = error
    service (by USER_B) and GUI (by USER_A) = error

    It definitely looks like the service is putting a lock on some crucial resource (a file? a port? no idea) which is tied directly to the user who launched it.


    Recently I had to delete many studies using dgate --deletestudy:study_id command.

    This is what I get in pacstrouble.log:

    20170715 01:23:42 ***Could not remove IOD f:\Data\1\58.30000010022506524004600000061_0003_000054_12670809220000.dcm

    Three things worth noting:
    1) it is not a permissions problem
    2) it happens rarely; sometimes it's only 1 file from the whole study, sometimes 5-10 files. In total it's a very, very small % of all files
    3) I do this in 2 parallel threads (just 2 'MS-DOS' windows, each running a .bat file containing a list of studies to delete)

    Finding the reason is one thing, but what's even more important to me right now is the exact behavior of dgate in such a case.
    Let's say that 1 file was left behind because of this error - does that mean the DICOMStudies/DICOMSeries etc. entries for that study are still in the database? (In other words: do you wait for confirmation of successful deletion of a file before removing its DB entry?)
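    Whatever the answer, the failed files can at least be collected for a retry pass. A small helper, assuming the pacstrouble.log line format shown above (the sample line is the one from the log):

```python
import re

# Matches the '***Could not remove IOD <path>.dcm' lines from pacstrouble.log.
IOD_RE = re.compile(r"\*\*\*Could not remove IOD (.+?\.dcm)\s*$")

def failed_removals(log_lines):
    """Return the .dcm paths from '***Could not remove IOD' lines."""
    return [m.group(1) for line in log_lines if (m := IOD_RE.search(line))]

sample = [r"20170715 01:23:42 ***Could not remove IOD "
          r"f:\Data\1\58.30000010022506524004600000061_0003_000054_12670809220000.dcm"]
print(failed_removals(sample))
```

    The resulting list could then be retried with plain deletes, or cross-checked against the database to see whether the corresponding entries are gone.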

    I spent the last few hours trying to understand what is going on. Here is what I have learned so far (which is not much, I'm afraid).

    First - a few key points worth noting:
    1) Mag0 is a shared folder: \\ARCHIWUM\
    2) there are 2 Windows user accounts in use, USER_A and USER_B.
    USER_A is a standard user and is used to run the dgate service (just to avoid running it with admin-level privileges).
    USER_B is an admin user, which I normally use.
    Both users have access to Mag0.

    I tested 3 cases:
    a) USER_A runs the service. USER_A launches the GUI
    b) USER_B runs the service. USER_B launches the GUI
    c) USER_A runs the service. USER_B launches the GUI

    In cases a) and b) everything works fine.
    In case c) the GUI crashes (the service also stops).

    I can see the credentials used to access Mag0 (SMB log) and they are fine.

    Running out of ideas... :(

    It's the newest one - 1.4.19a - however this happens in earlier versions too (not sure how far back).


    It looks like it has something to do with the owner of the service. It seems to be OK when launched as 'Local System', but when launched as a specific user ('dicom_arch' in my case) I get the error.
    Not really sure why... it clearly has access to PG because it works just fine... it's just the browser... somehow it's different.


    Testing shows that everything is fine as long as the GUI is launched as the same user as the service. In my case, if the service is launched with the 'log on as dicom_arch' option, then the GUI (conquestdicomserver.exe) must also be launched as the 'dicom_arch' user. Launching the .exe as admin doesn't help.

    So I guess the mystery is solved. To be honest I'm not sure why it works that way.. but at least I know how to mitigate it.

    That is a very good question :D and "I have no idea" would be the answer.
    This is a 3rd-party exe which, from what I understand, is responsible for HL7 communication. From what I hear, it reads tags from each image and, based on that info, sends HL7 to our worklist and RIS... but like I said, it's 3rd party, so I don't really know exactly.

    I just sent one study, and out of 578 images only 19 were affected by this (as in, there are 19 new .jpg files in printer_files). So it doesn't look like a permission issue.

    [sscscp]
    MicroPACS = sscscp
    Edition = Personal

    # Network configuration: server name and TCP/IP port
    MyACRNema = ARPACS1
    TCPPort = 2005

    # Reference to other files: known dicom servers; database layout; sops
    ACRNemaMap = acrnema.map
    kFactorFile = dicom.sql
    SOPClassList = dgatesop.lst

    # Host(ignored), name, username and password for ODBC data source
    SQLHost = localhost
    SQLServer = mpacs1
    Username = mpacs1
    Password = mpacs1
    Postgres = 1
    BrowseThroughDBF = 1
    DoubleBackSlashToDB = 1
    UseEscapeStringConstants = 1

    # Configure database
    TruncateFieldNames = 10
    MaxFieldLength = 254
    MaxFileNameLength = 255
    FixPhilips = 0
    FixKodak = 0
    KeepAlive = 0
    LargeFileSizeKB = 4096
    PrintSquareLandscape = 0
    UseKpacsDecompression = 1
    ZipTime = 05:
    UIDPrefix = 1.2.826.0.1.3680043.2.1326.4
    EnableReadAheadThread = 1
    PatientQuerySortOrder =
    StudyQuerySortOrder =
    SeriesQuerySortOrder =
    ImageQuerySortOrder =
    EnableComputedFields = 1
    IndexDBF = 1
    PackDBF = 0
    LongQueryDBF = 1000
    TCPIPTimeOut = 300
    FailHoldOff = 60
    RetryDelay = 100
    RetryForwardFailed = 0
    ImportExportDragAndDrop = 1
    QueueSize = 128
    WorkListMode = 0
    WorkListReturnsISO_IR_100 = 1
    DebugLevel = 0
    Prefetcher = 0
    LRUSort =
    AllowTruncate =
    DecompressNon16BitsJpeg = 1
    UseBuiltInJPEG = 1
    LossyQuality = 95
    IgnoreOutOfMemoryErrors = 0
    NoDICOMCheck = 0
    PadAEWithZeros = 0
    FileNameSyntax = 4

    # Configuration of compression for incoming images and archival
    DroppedFileCompression = j2
    IncomingCompression = j2
    ArchiveCompression = j2

    # Names of the database tables
    PatientTableName = DICOMPatients
    StudyTableName = DICOMStudies
    SeriesTableName = DICOMSeries
    ImageTableName = DICOMImages
    DMarkTableName = DICOMAccessUpdates
    RegisteredMOPDeviceTable = RegisteredMOPIDs
    UIDToMOPIDTable = UIDToMOPID
    UIDToCDRIDTable = UIDToCDRID

    # Banner and host for debug information
    PACSName = ARPACS1
    OperatorConsole =

    # Configure email of error messages
    MailHost =
    MailPort = smtp
    MailSignon =
    MailFromName =
    MailRcptName1 =
    MailCollectTime = 1
    MailWaitTime = 10

    # Configuration of disk(s) to store images
    MAGDeviceThreshhold = 0
    MAGDeviceFullThreshHold = 30
    IgnoreMAGDeviceThreshold = 0
    MAGDevices = 1
    MAGDevice0 = f:\Data\
    MAGDevice1 = f:\Data1\
    NightlyCleanThreshhold = 0

    # Configuration of converter programs to export DICOM slices
    ExportConverters = 2
    ExportModality0 = CT
    ExportConverter0 = AddFile2.exe
    ExportModality1 = MR
    ExportConverter1 = AddFile2.exe
    ImportConverters = 1
    ImportConverter0 = ifequal "%m", "SR"; destroy;

    All I see in the log looks OK to me. No errors or anything. Just this:

    20150628 16:56:07 [recompress]: recompressed with mode = j2 (strip=0)
    20150628 16:56:07 Written file: f:\Data\71041414794\
    20150628 16:56:07 [recompress]: recompressed with mode = j2 (strip=0)
    20150628 16:56:07 Written file: f:\Data\71041414794\

    It is running as a service.

    No one can help? :( I need to fix this somehow. When the number of those files reaches such big numbers it begins to impact other things, because Windows just can't handle so many files in one folder. Yes, I can use the scheduler to clear it, but I'd rather know WHY this is happening.
    Last time I tried to open this folder I waited over an hour and then just gave up... had to delete it all. (BTW, deleting took >5 hours - 1,400,000 files, as it turned out.)

    Update: I have now got rid of those files and updated to the latest version of dgate, and I already have 1500 new files there... so they are still being created. The question is WHY, and how to turn it off?
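    As a stopgap until the root cause is found, the scheduled cleanup mentioned above could look something like this sketch - os.scandir copes with huge directories far better than Explorer, and the one-day age limit is an assumption to avoid touching files the server may still be writing:

```python
import os
import time

def sweep(folder, suffix=".jpg", max_age_days=1.0):
    """Delete files with the given suffix older than max_age_days.
    Returns the number of files removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    with os.scandir(folder) as entries:
        for entry in entries:
            if (entry.is_file()
                    and entry.name.lower().endswith(suffix)
                    and entry.stat().st_mtime < cutoff):
                os.remove(entry.path)
                removed += 1
    return removed
```

    Pointed at printer_files and run from the Windows scheduler, this would keep the folder from ballooning into the millions again while the real cause is investigated.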


    I just realized that my printer_files folder is full of files... I mean hundreds of thousands of files, all .jpg.
    From what I understand this folder is used as a tmp for the compression process, am I right?
    So it looks like those JPGs are not deleted after compression... maybe... no idea. I use j2 as the compression method.

    Do you know what may be causing this kind of behavior? Unfortunately this node was not updated for a looong time and is still on 1.4.17beta3. Is this a known bug in that version? If so, can I safely delete those files? (I have already upgraded to the latest version.)

    I see.. well, if it is a known problem then it's OK.

    About the name change - I didn't have a choice. Basically I was forced to create a view of this table because of the bad design of some 3rd-party software, which was killing PG with a poorly optimized query on this table.

    So the bottom line is that "dgate -v --amMAG0.Archiving,MAG1" will just NOT work with a changed table name, am I right? Any workaround, maybe?