
LabVIEW


Code signing LabVIEW executables

Solved!
Go to solution

When I build my applications into executables, the app builder creates a setup.exe file, and a number of supports files and folders.  I can sign the setup.exe file no problem using the signing utility that comes with WDK.  My question is this, since I only sign the setup.exe file, then isn't it possible for someone to insert a virus into one of the support files and still have a valid signature?... MY signature?  The purpose of signing code is to prevent this sort of thing, so I wondered if any of you have solved this problem, and if so, how?  Thanks.
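For reference, a typical signtool invocation from the WDK looks roughly like this (the certificate file, password, and timestamp URL below are placeholders, not the poster's actual values):

```shell
:: Sign setup.exe with a code-signing certificate (placeholder cert/password/URL)
signtool sign /fd SHA256 /f mycert.pfx /p MyPassword ^
    /tr http://timestamp.digicert.com /td SHA256 setup.exe

:: Verify the signature afterwards
signtool verify /pa setup.exe
```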

 

0 Kudos
Message 1 of 13
(5,593 Views)

Nobody has run into this?

0 Kudos
Message 2 of 13
(5,541 Views)

@rickford66 wrote:

Nobody has run into this?


It's more likely that no one who has run into this has read your question yet, because it was posted on a Saturday. I don't have an answer for you, but I would wait until Monday, when many more people will be reading your thread.

0 Kudos
Message 3 of 13
(5,536 Views)
Solution
Accepted by topic author rickford66

According to this knowledgebase, you only need to sign the .exe file to protect your application (or the setup.exe if you’ve built an installer), so it is unlikely that digitally signing support files is necessary for security.

 

This page has a lot of information on the more general aspects of code signing if you’re interested in learning more, including information about Windows Authenticode (which WDK uses to digitally sign an application) and the file types it supports.

 

0 Kudos
Message 4 of 13
(5,487 Views)

Thanks for the info.  I see that NI recommends signing only the exe file, but doesn't state why that's enough.  I know it's enough to make Windows happy, but does it leave a back door open for embedding a virus?

0 Kudos
Message 5 of 13
(5,480 Views)

It really depends on your definition of a virus and also on the architecture of your app. If your app loads VIs dynamically from support libraries, then someone could modify such a support library without touching the main executable. However, signing that support library would mean nothing to Windows, because to Windows a VI library is simply a data file that it will not automatically check for an Authenticode signature when it's accessed.


The same applies if your LabVIEW app implements, for instance, a scripting engine that can execute external scripts. If your scripting engine gives its syntax access to core system functionality, an attacker could exploit it by modifying the external scripts accordingly. Again, a text file, which a script file usually is, cannot be signed with Authenticode, so standard signing is no solution. I also doubt you want to complicate things like script modifications by requiring the user to go through an authentication step after every change.
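That said, the unsigned-script gap can still be narrowed at the application level: the app can refuse to run an external script whose hash doesn't match a known-good value. A minimal Python sketch of the idea (the manifest and file name here are hypothetical; a LabVIEW app would do the equivalent with its own hash primitives):

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of known-good SHA-256 digests for external scripts.
# (This example digest is the SHA-256 of an empty file.)
TRUSTED = {
    "startup.script": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_trusted(script: Path) -> bool:
    """Return True only if the script's digest matches the trusted manifest."""
    digest = hashlib.sha256(script.read_bytes()).hexdigest()
    return TRUSTED.get(script.name) == digest
```

The manifest itself then becomes the thing to protect, e.g. by embedding it in the signed executable rather than shipping it as a loose file.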

 

So can you create a totally bulletproof application? Probably yes, but at horrible cost. Usually there is a "really secure" and a "secure enough", and where the line lies between them depends on your situation, evaluation and judgement. But really secure usually also means really troublesome to use, to the point of unusability.

Rolf Kalbermatter
My Blog
Message 6 of 13
(5,459 Views)

Ok, that does make sense... and as for being too secure that it's unusable, I think we only need look at how Vista was received to see an example of that.  haha   Thanks everyone for your help.

0 Kudos
Message 7 of 13
(5,438 Views)

@rickford66 wrote:

Ok, that does make sense... and as for being too secure that it's unusable, I think we only need look at how Vista was received to see an example of that.  haha   Thanks everyone for your help.


Except that a piece of software like Windows can't really be made totally secure, as shown by the numerous security patches released since then for both Vista and Windows 7, many of which fix common components present in both.

 

The problem with Vista was that the additional security measures added there were not very well thought out: more an afterthought bolted on than a properly designed and integrated security system. That is not to say that Windows NT itself didn't have all the security components to make it a very safe environment, but due to legacy concerns most of them were more or less bypassed by a standard user desktop installation. As such they were seldom exercised, and when MS decided to tighten the default security level, they ran into lots of usability troubles that large IT departments had worked around in the past with third-party add-ons when tightening the security level of their internal computers beyond what Windows did by default.

 

But I think it's safe to say that the only really totally safe computer is one that is powered off and locked in a safe. 😄

And the biggest security risk is not the software in the computer, but sits in front of the keyboard! People can do very unsafe things, out of laziness, thoughtlessness or simple ignorance.

 

Rolf Kalbermatter
My Blog
0 Kudos
Message 8 of 13
(5,399 Views)

@rolfk wrote:
...

And the biggest security risk is not the software in the computer, but sits in front of the keyboard! People can do very unsafe things, out of laziness, thoughtlessness or simple ignorance.

 


Layer 8 Problem 😄 

Really hard to solve... need to get rid of it 😉

 

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


0 Kudos
Message 9 of 13
(5,383 Views)

One thing I saw done at a customer (not by me) was some sort of checksum/CRC check on the support files.  I think they used an MD5 hash.

 

So, at the beginning of the software run, they would check every test VI (they were loading them from a directory or library) and confirm the MD5 match.

 

Now, I think this was done more for identifying changed VIs than for security.  I think the MD5 values were just stored in a text file.  If the company modified a VI, they updated the text file with the new hash.  This way, they were able to catch code that had been unintentionally modified.

 

You could certainly hardcode these values into your code for certain situations, but I am not sure how you could apply it to dynamically built libraries.  For example, if you wanted to MD5 the data.llb file from your EXE, you wouldn't know it until the EXE is built, but then you can't hardcode it into your EXE.

 

This certainly would not have Windows ID a problem, but your code could ID that something has been modified that is unexpected.
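That manifest approach can be sketched in a few lines of Python (the manifest format and file names here are illustrative, not the customer's actual scheme):

```python
import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    """Compute the MD5 digest of a file's contents."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def check_manifest(manifest: Path, root: Path) -> list[str]:
    """Return the names of files whose current MD5 differs from the manifest.

    Each manifest line has the form '<md5>  <relative filename>'.
    """
    mismatches = []
    for line in manifest.read_text().splitlines():
        expected, name = line.split(None, 1)
        if md5_of(root / name) != expected:
            mismatches.append(name)
    return mismatches
```

As noted above, this only tells your own code that something changed; Windows itself still sees nothing wrong.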

0 Kudos
Message 10 of 13
(5,378 Views)