Sunday, May 1, 2011

Not letting users make mistakes vs. giving them flexibility

I'm working on a product that is meant to be simple to use and simple to set up; the competition largely requires a long set-up period and in some cases goes as far as a bespoke solution for each customer. One part of our application is now expanding based on customer requests, and it looks like we'll need to make it very flexible so each customer can have a lot of control over how it behaves for them. The problem is that I don't want to make the system too configurable, as I believe that makes it more complex to learn and to work with. I'm also concerned it opens the door to someone messing things up for themselves; it's kind of like handing them a gun, although I'm not actually pointing it at their foot for them.

Has anyone else faced a similar dilemma of putting power in users' hands? How did you solve it, and what was the result?

From stackoverflow
  • I highly recommend you read Joel's Controlling Your Environment Makes You Happy, which can be described as a treatise on user interface design but is really about usability with a healthy dab of psychology thrown in.

    The section I'm referring to is Choices:

    Every time you provide an option, you're asking the user to make a decision.

    This is something I strongly agree with. Many developers, product managers and so on take the easy route and instead of figuring out what users actually need, they just give them a choice. You see this in enterprise bloatware like Clearcase or PVCS where there are so many options--90% of which you'd never change--indicating the designers have tried to make it all things to all men rather than doing one or two things exceptionally well.

    Instead it just does lots of things badly.

    Keep it simple, follow conventions, don't overwhelm the user with pointless and unnecessary choices and make the software behave like a normal user would expect. That alone would set you apart from an awful lot of other products.

  • The answer to this lies in who your end users are. I used to write software that got used by professional sports coaches. While these guys were definitely good at what they did, they were hardly proficient at computer use, so our configurability was kept to a minimum (at least as far as what could be done in the GUI).

    On the other hand, if you're dealing with power users, adding options is usually not a bad thing as long as they aren't intrusive.

    It's all about who's going to be getting them.

  • Read Jeff Atwood's Training Your Users. It's a great article with some very useful links.

  • Personally I like the TurboTax model (http://turbotax.intuit.com/). When creating a tax return, I get a simple, tell-me-like-I'm-five wizard that takes me step-by-step through the process, but I can step outside the process at any time and use more advanced features, returning to the process later.

    Make it easy and simple and uncluttered for your user to do what they're going to do 80% of the time, but give them the power to deliberately step outside of the norm.

    Gavin Miller : +1 Intuit's interfaces for taxes are an excellent lesson in UI.
  • I don't normally like to subscribe to the idea that all users are stupid, but there is a rule which can still be applied:

    If you give them the opportunity, they WILL break it

    Now it is up to you whether or not to give them the ability to do potentially dumb things. Or better yet, develop it so that when they do do the stupid voodoo that they do, the damage can be reverted or recovered from gracefully.

    belgariontheking : I am reminded of the Spider of Doom. Now there is a very stupid user. http://thedailywtf.com/Articles/The_Spider_of_Doom.aspx
  • I like Firefox's approach to this. The basic options are accessible in the options menu; all the rest is under about:config. Thus you have an easy interface and incredible flexibility if you need it.

  • Interesting timing for your question. In the U.S. this is Income Tax week. Filling out the ol' 1040 and associated subforms should give us some sympathy for what users endure.

    Lessons I take away are:

    Only ask questions that relate to the user domain; avoid questions relating to the software system; and if you can derive the answer or suggest a most likely answer, do so.

    Put related questions together (as long as they are normally entered by the same person using data most likely available at the same place and time, which is the definition of related for these purposes).

    Make it support incremental input. It should be easy to enter the data they have, and defer completing it when the rest is available.

    Show validity and completeness status. Make it clear and obvious how close they are to having complete, valid data.

    Make it interruptible. Make sure it's possible to interrupt the process, leave the application, come back, and resume where they left off.


    Yup, it's harder to program. Embrace it.

  • I've had great success, and been happiest as a user, when using sensible defaults. In other words, make the most common use case easy (or even better, free), but give users the ability to step outside of that use case when the situation calls for it.

  • There are at least two ways to build a good software product:

    1. Focus on a narrow set of functionality, and implement that functionality very well.

    2. Design your system to be customizable (ideally, through scripting.) If you do the base system right, it will be easy to provide the basic, no options, just-do-what-I-want functionality on top of the customization layer.

    Unfortunately, there are many more ways to create a bad software product.

  • Your question implies that you can either provide a flexible solution OR make it foolproof.

    I wouldn't put it like that. To me this is rather a matter of user expectations and the question in the first place would be: How can I meet all important user expectations (even if they conflict with each other) without corrupting the application?

    For instance, a web application with a menu, breadcrumb navigation, a site map and a search offers, together with the inline links, five different ways to find what you're looking for and how to get there. That way most users can quickly and easily find the functionality they are expecting, and the need for extensive documentation might actually decrease.

    So the answer might be to offer several different carefully chosen ways to solve one specific task, while each of them can be streamlined independently to avoid user mistakes.

winapi: CreateProcess but hide the process' window?

I am using CreateProcess to create a cmd.exe process that is passed a parameter, which it executes before quitting; this makes a command prompt window flash up on the screen.

I tried to avoid this by setting the STARTUPINFO struct's wShowWindow to SW_HIDE, but this seems to affect the calling window, not the window of the process that gets executed.

Is there any way to use CreateProcess to launch a program so that it is hidden from view?

Also, what is the proper, standard WinAPI way to get environment variables?

From stackoverflow
  • Set the STARTF_USESHOWWINDOW flag in dwFlags; without it, the wShowWindow member is ignored.

    by sharptooth

  • The following link here describes how to create the window silently:

    DWORD RunSilent(char* strFunct, char* strstrParams)
    {
        STARTUPINFO StartupInfo;
        PROCESS_INFORMATION ProcessInfo;
        char Args[4096];
        char *pEnvCMD = NULL;
        char *pDefaultCMD = "CMD.EXE";
        ULONG rc;
    
        memset(&StartupInfo, 0, sizeof(StartupInfo));
        StartupInfo.cb = sizeof(STARTUPINFO);
        StartupInfo.dwFlags = STARTF_USESHOWWINDOW;
        StartupInfo.wShowWindow = SW_HIDE;
    
        Args[0] = 0;
    
        pEnvCMD = getenv("COMSPEC");
    
        if(pEnvCMD){
    
         strcpy(Args, pEnvCMD);
        }
        else{
         strcpy(Args, pDefaultCMD);
        }
    
        // "/c" option - Do the command then terminate the command window
        strcat(Args, " /c "); 
        //the application you would like to run from the command window
        strcat(Args, strFunct);  
        strcat(Args, " "); 
        //the parameters passed to the application being run from the command window.
        strcat(Args, strstrParams); 
    
        if (!CreateProcess( NULL, Args, NULL, NULL, FALSE,
         CREATE_NEW_CONSOLE, 
         NULL, 
         NULL,
         &StartupInfo,
         &ProcessInfo))
        {
         return GetLastError();  
        }
    
        WaitForSingleObject(ProcessInfo.hProcess, INFINITE);
        if(!GetExitCodeProcess(ProcessInfo.hProcess, &rc))
         rc = 0;
    
        CloseHandle(ProcessInfo.hThread);
        CloseHandle(ProcessInfo.hProcess);
    
        return rc;
    
    }
    

    I think getenv and setenv are fine for that; I am not sure what else you are asking about in that respect.

  • This might be overkill for your needs, but you can hook the ShowWindow API and never show any windows for that process.

  • If it's just a console app, you can also use the CREATE_NO_WINDOW flag as part of the CreateProcess call itself, e.g.

    CreateProcess(NULL, lpszCommandLine, NULL, NULL, FALSE, 
                  CREATE_NO_WINDOW, NULL, NULL, &si, &pi);
    

    Also, see this page for information about environment variables.
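
    For the environment-variable part of the question, here is a minimal sketch of the native route using GetEnvironmentVariable/SetEnvironmentVariable rather than the CRT's getenv/setenv (the variable names below are just examples):

    #include <windows.h>
    #include <stdio.h>
    
    int main(void)
    {
        char value[4096];
    
        /* Returns the number of characters copied, or 0 if the variable does not exist. */
        DWORD len = GetEnvironmentVariableA("COMSPEC", value, sizeof(value));
        if (len > 0 && len < sizeof(value))
            printf("COMSPEC=%s\n", value);
    
        /* SetEnvironmentVariable changes the variable for this process only. */
        SetEnvironmentVariableA("MY_TEMP_FLAG", "1");
        return 0;
    }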

how to create a custom file extension in C#?

I need help creating a custom file extension in my C# app. I created a basic notes-management app. Right now I'm saving my notes as .rtf (note1.rtf). I want to be able to create a file extension that only my app understands (like note.not, maybe).

From stackoverflow
  • File extensions are an arbitrary choice for your formats; what really matters is your application registering a given file extension as a certain file type in Windows, typically upon installation.

    Coming up with your own file format usually means you save your data in a layout that only your application can parse. It can be plain text or binary, and it can even use XML or whatever else; the point is that your app should be able to parse it easily.
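
    For instance, a tiny sketch of a "custom" container that just wraps the RTF you already produce behind a magic header (the NOTE1 marker and the type name are made-up examples):

    using System.IO;
    using System.Text;
    
    static class NoteFile
    {
        // Writes a small magic/version marker followed by the RTF payload.
        public static void Save(string path, string rtf)
        {
            using (var stream = new FileStream(path, FileMode.Create))
            using (var writer = new BinaryWriter(stream, Encoding.UTF8))
            {
                writer.Write("NOTE1");   // marker only this app expects
                writer.Write(rtf);       // length-prefixed RTF payload
            }
        }
    }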

  • I think it's a matter of creating the right registry values,

    or check this CodeProject article.

    peSHIr : And I would let your deployment system (like a Setup project, generating an MSI, or whatever it is you use) take care of the exact details of registering.
  • You can save a file with whatever extension you want; just put it in the file name when saving.

    I sense that your real question is "How can I save a file in something other than RTF?". You'd have to invent your own format, but you probably don't actually want that. You can still save RTF into a file named mynote.not.

    I would advise you to keep using a format that is readable by other programs. Your users will be thankful once they want to do something with their notes that your program doesn't support.

  • There are two possible interpretations of your question:

    What should be the file format of my documents?

    You are currently saving your notes in the RTF format. No matter what file name extension you choose to save them with, any application that understands RTF will be able to open your notes, as long as the user knows the file is RTF and points that app at it.

    If you want to save your documents in a custom file format, so that other applications cannot read them, you need to come up with code that takes the RTF stream produced by the Rich Edit control (I assume that's what you use as the editor in your app) and serializes it to a binary stream using your own format.

    I personally would not consider this worth the effort...

    What is the file name extension of my documents

    You are currently saving your documents in RTF format with the .rtf file name extension. Other applications are associated with that extension, so double-clicking such a file in Windows Explorer opens one of those applications instead of yours.

    If you want to be able to double click your file in Windows Explorer and open your app, you need to change the file name extension you are using AND create the proper association for that extension.

    The file extension associations are defined by entries in the registry. You can create these per-machine (in HKLM\Software\Classes) or per-user (in HKCU\Software\Classes), though per-machine is the most common case. For more details about the actual registry entries and links to MSDN documentation and samples, check my answer to this SO question on Vista document icon associations.
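
    As a rough illustration of those per-user entries, here is a minimal C# sketch (the .not extension, the MyNotes.Document ProgID and appPath are made-up examples):

    using Microsoft.Win32;
    
    static class NotFileAssociation
    {
        // Per-user registration (HKCU), so no admin rights are needed.
        public static void Register(string appPath)
        {
            // Map the extension to a ProgID.
            using (RegistryKey ext = Registry.CurrentUser.CreateSubKey(@"Software\Classes\.not"))
            {
                ext.SetValue("", "MyNotes.Document");
            }
    
            // Tell Windows how to open files with that ProgID.
            using (RegistryKey cmd = Registry.CurrentUser.CreateSubKey(
                @"Software\Classes\MyNotes.Document\shell\open\command"))
            {
                cmd.SetValue("", "\"" + appPath + "\" \"%1\"");
            }
        }
    }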

  • As a deployment point, you should note that ClickOnce supports file extensions (as long as it isn't in "online only" mode). This makes it a breeze to configure the system to recognise new file extensions.

    You can find this in project properties -> Publish -> Options -> File Associations in VS2008. If you don't have VS2008 you can also do it manually, but it isn't fun.

  • Can anyone provide me with straightforward code that I can apply to my application and that will work? Thanks in advance.

    Marc Gravell : I'd argue that the ClickOnce option is the simplest, since the VS IDE will do all the work for you.
  • The problem here is that we are all looking for a sample of code that creates an extension for a program; anybody who has sample code for it, please post it.

When is it good to use embedded script language like Lua

Hi,

I've been playing WoW for about two years and am quite curious about Lua, which is used to write addons. Since what I've read so far about Lua is "fast", "light" and "this is great", I was wondering how and when to use it.

What is the typical situation where you would need to embed a scripting language like Lua in a system?

From stackoverflow
  • Rapid development for applications with real-time constraints. Computer games are one of these ;-)

    Javier : other non-game apps using Lua both for extensibility and for making it easy to develop/refactor: apt, nmap, monotone, Adobe LightRoom, love2d, wildpockets
  • It's a valid solution if you want to allow third parties to develop plug-ins or mods for your software.

    You could implement an API in whatever language you are using, but a scripting language like Lua tends to be simpler and more accessible for casual developers.

  • When you need end users to be able to define/change the system without requiring it to be rewritten. It's used in games to allow extensions, or to let the main game engine remain unchanged while allowing content to be changed.

  • Lua is:

    • Lightweight
    • Easy to integrate, even in an asynchronous environment such as a game
    • Easy to learn for non-programmer staff such as integrators, designers and artists

    Since games usually require all of those qualities, Lua is mostly used there. Another situation could be any application that needs some scripting functionality, but there developers often opt for a slightly more heavyweight solution such as .NET or Python.

  • Embedded scripting languages work well for storing configuration information as well. Last I checked, the Mozilla family all use JavaScript for their config information.

    Next up, they are great for developing plugins. You can create a custom API to expose to the plugin developers, and the plugin developers gain a lot of freedom from having an entire language to work with.

    Another is when flat files aren't expressive enough. If you want to write data driven apps where behavior is parameterized, you'll get really tired of long strings of conditionals testing for config combinations. When this happens, you're better off writing the rules AND their evaluation into your config.

    This topic gets some coverage in the book The Pragmatic Programmer.

  • In addition to the scripting and configurability cases mentioned, I would simply state that Lua+C (or Lua+C++) is a perfect match for any software development. It allows one to make an engine/behaviour split where the engine is done in C/C++ and the behaviour or customization is done in Lua.

    OS X Cocoa has Objective-C (a C and Smalltalk amalgam, where the language changes by the line). I find Lua+C similar, only the language changes per source file, which to me is a better abstraction.

    The reasons why you would not want to use Lua are also noteworthy: it hardly has a good debugger, for one. Then again, people hardly seem to need one either. :)
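
    To make that engine/behaviour split concrete, a minimal embedding sketch (assuming Lua 5.1 or later; behaviour.lua is just a placeholder script name):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    
    int main(void)
    {
        lua_State *L = luaL_newstate();   /* create a Lua VM                   */
        luaL_openlibs(L);                 /* expose the standard Lua libraries */
    
        /* The C "engine" loads and runs the Lua "behaviour" layer. */
        if (luaL_dofile(L, "behaviour.lua") != 0)
            fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
    
        lua_close(L);
        return 0;
    }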

  • In addition to all the excellent reasons mentioned by others, embedding Lua in C is very helpful when you need to manipulate text, work with files, or just need a higher-level language. Lua has lots of nifty features (tables, first-class functions, lots of other good stuff). Also, while Lua isn't as fast as C or C++, it's pretty quick for an interpreted language.

  • A scripting language like Lua can also be used if you have to change code (with immediate effect) while the application is running. One may not see this in WoW because, as far as I remember, the code is loaded at startup (and not rechecked and reloaded while running).

    But think of another example: web server and scripting language. Thankfully, you can change your PHP code without having to recompile or restart Apache.

    Steve Yegge did the same thing for his own MMORPG engine powering Wyvern, using Jython or Rhino and JavaScript (I can't remember which). He wrote the core engine in Java, but the program logic in Python/JavaScript.

    The effect of this is:

    • he doesn't have to restart the core engine when changing the scripts, because that would disconnect all the players
    • he can let others do the simpler programming like defining new items and monsters without exposing all the critical code to them
    • sandboxing: if an error happens inside the script, you may be able to handle it gracefully without endangering the surrounding application

Access the BPM ID3 tag in iPhone OS 3.0

Is there any way to access the BPM (beats per minute) ID3 tag of a song on your iPod using the iPhone OS 3.0 SDK? I'm looking at

https://developer.apple.com/iphone/prerelease/library/documentation/MediaPlayer/Reference/MPMediaItem_ClassReference/Reference/Reference.html

and I don't see it:

NSString *const MPMediaItemPropertyPersistentID;     /* filterable */
NSString *const MPMediaItemPropertyMediaType;        /* filterable */
NSString *const MPMediaItemPropertyTitle;            /* filterable */
NSString *const MPMediaItemPropertyAlbumTitle;       /* filterable */
NSString *const MPMediaItemPropertyPodcastTitle;     /* filterable */
NSString *const MPMediaItemPropertyArtist;           /* filterable */
NSString *const MPMediaItemPropertyAlbumArtist;      /* filterable */
NSString *const MPMediaItemPropertyGenre;            /* filterable */
NSString *const MPMediaItemPropertyComposer;         /* filterable */
NSString *const MPMediaItemPropertyPlaybackDuration;
NSString *const MPMediaItemPropertyAlbumTrackNumber;
NSString *const MPMediaItemPropertyAlbumTrackCount;
NSString *const MPMediaItemPropertyDiscNumber;
NSString *const MPMediaItemPropertyDiscCount;
NSString *const MPMediaItemPropertyArtwork;
NSString *const MPMediaItemPropertyLyrics;
NSString *const MPMediaItemPropertyIsCompilation;    /* filterable */

From stackoverflow
  • If you don't see it, submit a Radar to have it included, or perhaps to ask for an API with a more general means of querying tag values.

Custom about dialogs in Eclipse applications

Hi,

I have an Eclipse workbench application composed of many plugins. I'd like to set the about text of the app to reflect the build number.

There seem to be a couple of places to do this:

  • Inside the application's .product file (setting text here doesn't seem to do anything?)
  • Inside the plugin.xml file of the main plugin (this does seem to work)

I have two questions:

  1. What is the best way to achieve this? via plugin.xml?
  2. Why is the text I set in the product file ignored?
From stackoverflow

How to markup scientific names in XHTML?

I would like to know the best way to markup scientific names in XHTML. This answer says I should use the <i> element, which I'm not too in favour of, unless of course, it is the right element to use.

I could use <span class="scientific">, but seeing as one of the first uses of HTML was to markup scientific documents, I'd imagine there'd be a better semantic hook for this sort of thing.

Also, is there any element to markup the common name of a scientific name?

Note: It looks like Wikipedia, or at least this article, is using <i> for scientific names.

Edit: Would the <dfn> tag be appropriate?

From stackoverflow
  • dfn is for a definition of something.

    <p>The prefix cardio- means <dfn>of the heart</dfn>.</p>
    

    As far as I can see in the list of HTML 4 elements nothing specifically fits the bill. This leaves you with a few options:

    <span class="scientific">cardio</span>
    

    The semantics are added by the class, and so this is probably the most correct way, technically. However, it does have the downside that without your CSS, it won't appear different in any way to the surrounding text. Another option might be this: /me prepares to duck for cover

    <i class="scientific">cardio</i>
    

    Now before I get my head bitten off for using the verboten element, <i>, consider that it is no less descriptive than using <span>, and even if a stylesheet were missing, you'd still get vaguely the correct formatting. Just make sure you add the class attribute.

    alex : Good answer, Nick. Hopefully you don't incur any downvotes from people who see <i> and go urgh! I think I'll go with <i> as it is used by Wikipedia and it seems to fit the bill. Thanks for the answer.
    nickf : oh, Wikipedia doesn't use any semantics at all in its markup, so I wouldn't use it as a guide. Click Edit on any page and you'll see why. Rather than get their users to learn the correct classes to use, etc, they go for a very simple markup... one step back from WYSIWYG, really.
    alex : Yeah, and I bet they don't use <ins> or <del> with their revisions?
    nickf : haha oh man, that'd be so messy.
    alex : Nick, according to http://htmlhelp.com/reference/html40/deprecated.html <i> hasn't been deprecated, it has just lost its presentational meaning... it does not mean italicize text anymore...
    alex : it's also listed here: http://www.w3.org/TR/html401/index/elements.html
    nickf : oh, well there you go!
    alex : It's interesting... most people (including me until recently) thought <i> and <b> were deprecated.
    porneL : Your example is wrong. http://www.whatwg.org/specs/web-apps/current-work/multipage/text-level-semantics.html#the-dfn-element It should be

    <p>The prefix <dfn>cardio-</dfn> means of the heart.</p>

    nickf : oh that's... well... different to what I had expected. Looks like dfn might actually be the right way to do it!

How to add reports in MVC application?

In the Add New Item dialog there is an option to add a Report and a Report Wizard, but I don't know how to work with them. Is there any blog or video available to learn from?

From stackoverflow
  • Yes I am absolutely sure of it. Have you checked YouTube?

    Vikas : can you post links?

What causes this error with for...in after assigning Array.prototype.indexOf?

I was surprised when I was able to reproduce a bug with a minimum amount of code. Note that in this minimalist example Array.indexOf isn't being called. Also note that I've tried several different implementations of indexOf, including several from stackoverflow.com.

The bug is, when the for...in executes in IE, three alerts are displayed: "indexOf", "0", and "1". In FF, as one would expect, only two ("0", "1") appear.

<html>
<body onLoad="test();">
<script language="javascript">
   var testArray = ['Foo', 'Bar'];

   if(!Array.prototype.indexOf) {
      Array.prototype.indexOf = function (obj, fromIndex) {
         if (fromIndex == null) {
            fromIndex = 0;
         } else if (fromIndex < 0) {
            fromIndex = Math.max(0, this.length + fromIndex);
         }
         for (var i = fromIndex, j = this.length; i < j; i++) {
            if (this[i] === obj)
               return i;
         }
         return -1;
      };
   }

   function test() {
      var i;

      for(i in testArray) {
         alert(i);
      }
   }
</script>
</body>
</html>

Can anyone explain this? I've already changed my code to use a while so I'm not under the gun, but this one really has me stumped. It reminds me of memory overrun errors in c.

From stackoverflow
  • See "for in Intrigue" on the Yahoo! User Interface blog.

    The reason your code works as expected in Firefox is that Firefox already provides a native Array.prototype.indexOf, so your own indexOf method is never added there. The for...in loop iterates over all enumerable properties, including ones added to the prototype chain, which is why IE also visits the indexOf method you added. Douglas Crockford suggests the following solution:

    for (var p in testArray) {
        if (testArray.hasOwnProperty(p)) {
            alert(testArray[p]);
        }
    }
    

    Alternatively, you can just filter out functions:

    for (var p in testArray) {
        if (typeof testArray[p] !== "function") {
            alert(testArray[p]);
        }
    }
    

    Also, as "nickf" points out, it is best not to use the for in loop for iterating over arrays. The for in loop is intended for iterating over the keys in an object.

    Steve

    overslacked : Thank you kindly; that's exactly what I needed.
    nickf : You really shouldn't be using for..in on arrays at all. See the link in my answer.
    Tomalak : @Steve: Since there is nothing broken, I wouldn't call using "hasOwnProperty()" a fix. ;-)
    Steve Harrison : @Tomalak: Point taken! I've changed it to "solution".
  • for .. in is meant for looping through object properties, definitely not arrays.

    Stick to the standard:

    for (var i = 0, l = myArray.length; i < l; ++i) { .. }
    

    More info at the Mozilla Developer Centre:

    A for...in loop does not iterate over built-in properties. These include all built-in methods of objects, such as String's indexOf method or Object's toString method. However, the loop will iterate over all user-defined properties (including any which overwrite built-in properties).

    Although it may be tempting to use this as a way to iterate over an Array, this is a bad idea. The for...in statement iterates over user-defined properties in addition to the array elements, so if you modify the array's non-integer or non-positive properties (e.g. by adding a "foo" property to it or even by adding a method or property to Array.prototype), the for...in statement will return the name of your user-defined properties in addition to the numeric indexes.

    overslacked : As I mentioned in the question, this is a minimalist example that only reproduces the problem; but your point is a good one.
    overslacked : Just to make sure I'm giving proper credits, your answer was 100% accurate; but it was the initial information provided in Steve's answer regarding why the problem was occurring at all that I needed. Thanks for the help!

Base 64 encode vs loading an image file.

So I am working on something in PHP where I have to get my images from a SQL database, where they will be encoded in base64. The speed of displaying these images is critical, so I am trying to figure out whether it would be faster to turn the database data into an image file and then load it in the browser, or to just echo the raw base64 data and use:

<img src="data:image/jpeg;base64,/9j/4AAQ..." />

Which is supported in Firefox and other Gecko browsers.

So to recap: would it be faster to transfer an actual image file or the base64 data? Would it require fewer HTTP requests when using AJAX to load the images?

The images would be no more than 100 pixels total.

From stackoverflow
    • Base64 encoding makes the file bigger and therefore slower to transfer.
    • By including the image in the page, it has to be downloaded every time. External images are normally only downloaded once and then cached by the browser.
    • It isn't compatible with all browsers
    Gary Richardson : also, base64 decoding is slow.
  • Generally, using base64 encoding is going to increase the byte size by about 1/3. Because of that, you are going to have to move an extra 1/3 more bytes from the database into the server, and then move those same extra bytes over the wire to the browser.

    Of course, as the size of the image grows, the overhead mentioned will increase proportionately.

    That being said, I think it is a good idea to change the files into their byte representations in the db, and transmit those.

  • I don't think data: URIs work in IE7 or below.

    When an image is requested you could save it to the filesystem and then serve it from there from then on. If the image data in the database changes, just delete the file. Serve it from another domain too, like img.domain.com. You get all the benefits of Last-Modified or ETags for free from your web server, without having to start up PHP unless you need to.

    If you're using apache:

    # If the file doesn't exist:
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^/(image123).jpg$ makeimage.php?image=$1
    
  • If you want the fastest speed, then you should write them to disk when they are uploaded/modified and let the webserver serve static files. Rojoca's suggestions are good, too, since they minimize the invocation of php. An additional benefit of serving from another domain is (most) browsers will issue the requests in parallel.

    Barring all that, when you query for the data, check if it was last modified, then write it to disk and serve from there. You'll want to make sure you respect the If-Modified-Since header so you don't transfer data needlessly.

    If you can't write to disk, or some other cache, then it would be fastest to store it as binary data in the database and stream it out. Adjusting buffer sizes will help at that point.

  • Why regenerate the image again and again if it will not be modified? Hypothetically, even if there are 1000 different possible images to be shown based on 1000 different conditions, I still think that 1000 images on disk are better. Remember, disk-based images can be cached by the browser and save bandwidth, etc.

  • Well, I don't agree with any of you. There are cases when you have to load more and more images; not all pages contain just 3 images. I'm actually working on a site where you have to load more than 200 images. What happens when 100,000 users request those 200 images on a heavily loaded site? The disks of the server returning the images could collapse. Even worse, you have to make that many requests to the server instead of one with base64. For that many thumbnails I'd prefer the base64 representation, pre-saved in the database. I found the solution and a strong argument at http://www.stoimen.com/blog/2009/04/23/when-you-should-use-base64-for-images/. The guy really is in that situation and made some tests. I was impressed and made my own tests as well. The reality is as he says: for that many images loaded in one page, the single response from the server is really helpful.

    Piskvor : The guy you mention seems to say that his images had 2MB (megabytes) when served from disk and went to 45KB (kilobytes) when served inline. That alone makes his case pretty dubious.

'Quick' Shell Scripting Help

I need help with this shell script.

  1. Must use a loop of some sort.
  2. Must use input data exactly as shown.
  3. Output redirection should be accomplished within the script, not on the command line.

Here's the input files I have: http://pastebin.com/m3f783597

Here's what the output needs to be: http://pastebin.com/m2c53b25a

Here's my failed attempt: http://pastebin.com/m2c60b41

And that failed attempt's output: http://pastebin.com/m3460e78c

From stackoverflow
  • This is homework, I assume?

    Read up on the sort and paste commands: man sort, man paste

  • Here's the help. Try to follow these hints as much as possible before looking at my solution below. That will help you out more in the long run, and in the short run too, since it's a certainty that your educator can see this as easily as you can.

    If he finds you've plagiarized code, it will probably mean an instant fail.

    Your "failed attempt" as you put it is here. It's actually not too bad for a first attempt.

    echo -e "Name\t\t On-Call\t\t Phone"
    for daycount in 2 1 4 5 7 6 3
    do
        for namecount in 3 2 6 1 7 4 5
        do
            day=`head -n $daycount p2input2|tail -n 1|cut -f 2 -d " "`
            name=`head -n $namecount p2input1|tail -n 1|cut -f 1 -d " "`
            phone=`head -n $namecount p2input1|tail -n 1|cut -f 2 -d " "`
            echo -e "$name\c"
            echo -e "\t\t$day\c"
            echo -e "\t\t$phone"
            continue
        done
    done
    

    And here's the hints:

    • You have two loops, one inside the other, each running 7 times. That means 49 lines of output rather than 7. You want to process each day and look up the name and phone for that day (actually the name for that day and the phone for that name).
    • It's not really suitable to hardcode line numbers (although I admit it is sneaky) - what if the order of the data changes? Better to search on values.
    • Tabs make things messy, use spaces instead since then the output doesn't rely on terminal settings and you don't need to worry about misaligned tabs.

    And, for completeness, here's the two input files and the expected output:

    p2input1                  p2input2
    ========                  ========
    Dave 734.838.9801         Bob Tuesday
    Bob 313.123.4567          Carol Monday
    Carol 248.344.5576        Ted Sunday
    Mary 313.449.1390         Alice Wednesday
    Ted 248.496.2204          Dave Thursday
    Alice 616.556.4458        Mary Saturday
    Frank 634.296.3357        Frank Friday
    
    Expected output
    ===============
    Name            On-Call         Phone
    
    carol           monday          248.344.5576
    bob             tuesday         313.123.4567
    alice           wednesday       616.556.4458
    dave            thursday        734.838.9801
    frank           friday          634.296.3357
    mary            saturday        313.449.1390
    ted             sunday          248.496.2204
    

    Having said all that, and assuming you've gone away for at least two hours to try and get your version running, here's mine:

     1 #!/bin/bash
     2 spc20="                    "
     3 echo "Name            On-Call         Phone"
     4 echo
     5 for day in monday tuesday wednesday thursday friday saturday sunday
     6 do
     7     name=`grep -i " ${day}$" p2input2 | awk '{print $1}'`
     8     name=`echo ${name} | tr '[A-Z]' '[a-z]'`
     9     bigname=`echo "${name}${spc20}" | cut -c1-15`
    10
    11     bigday=`echo "${day}${spc20}" | cut -c1-15`
    12
    13     phone=`grep -i "^${name} " p2input1 | awk '{print $2}'`
    14
    15     echo "${bigname} ${bigday} ${phone}"
    16 done
    

    And the following description should help:

    • Line 1 selects the right shell; not always necessary.
    • Line 2 gives us enough spaces to make formatting easier.
    • Lines 3-4 give us the title and blank line.
    • Lines 5-6 cycle through the days, one at a time.
    • Line 7 gives us a name for the day. 'grep -i " ${day}$"' searches for the given day (regardless of upper or lower case) at the end of a line in p2input2, while the awk statement gives you field 1 (the name).
    • Line 8 simply makes the name all lowercase.
    • Line 9 creates a string of the right size for output by appending padding spaces and then cutting off everything after the 15th character.
    • Line 11 does the same for the day.
    • Line 13 is very similar to line 7 except it searches p2input1, looks for the name at the start of the line and returns the phone number as the second field.
    • Line 15 just outputs the individual items.
    • Line 16 ends the loop.

    So there you have it, enough hints to (hopefully) fix up your own code, and a sample as to how a professional would do it :-).

    It would be wise to read up on the tools used, grep, tr, cut and awk.

  • Pax has given a good answer, but this code invokes fewer processes (11 vs a minimum of 56 = 7 * 8). It uses an auxiliary data file to give the days of the week and their sequence number.

    cat <<! >p2input3
    1 Monday
    2 Tuesday
    3 Wednesday
    4 Thursday
    5 Friday
    6 Saturday
    7 Sunday
    !
    
    sort +1 p2input3 > p2.days
    sort +1 p2input2 > p2.call
    join -1 2 -2 2 p2.days p2.call | sort +2 > p2.duty
    sort +0 p2input1 > p2.body
    join -1 3 -2 1 p2.duty p2.body | sort +2n | tr '[A-Z]' '[a-z]' |
    awk 'BEGIN { printf("%-14s %-14s %s\n", "Name", "On-Call", "Phone");
                 printf "\n"; }
               { printf("%-14s %-14s %s\n", $1, $2, $4);}'
    rm -f p2input3 p2.days p2.call p2.duty p2.body
    

    The join command is powerful, but requires the data in the two files to be in sorted order on the joining keys. The cat command gives a list of days and the day number. The first sort places that list in alphabetic order of day name. The second sort places the names of the people on duty in alphabetic order of day name too. The first join then combines those two files on day name, and then sorts based on user name, yielding the output:

    Wednesday 3 Alice
    Tuesday 2 Bob
    Monday 1 Carol
    Thursday 4 Dave
    Friday 5 Frank
    Saturday 6 Mary
    Sunday 7 Ted
    

    The last sort puts the names and phone numbers into alphabetic name order. The second join then combines the name + phone number list with the name + duty list, yielding a 4 column output. This is run through tr to make the data all lower case, and then formatted with awk, which demonstrates its power and simplicity nicely here (you could use Perl or Python instead, but frankly, that would be messier).

    Perl has a motto: TMTOWTDI "There's more than one way to do it".

    That often applies to shell scripting too.


    I suppose my code does not use a loop...oh dear. Replace the initial cat command with:

     for day in "1 Monday" "2 Tuesday" "3 Wednesday" "4 Thursday" \
                "5 Friday" "6 Saturday" "7 Sunday"
     do echo $day
     done > p2input3
    

    This now meets the letter of the rules.

  • Try this one:

    sort file1.txt > file1sort.txt
    sort file2.txt > file2sort.txt
    join file2sort.txt file1sort.txt | column -t > result.txt
    rm file1sort.txt file2sort.txt
    

Visual Studio 2008 "Add Service Reference" for Sharepoint: 401 and port numbers

I'm trying to "Add Service Reference" to SharePoint web services (e.g., "http://cogent-moss/_vti_bin/Webs.asmx"), but am having trouble. I seem to always get this error:

The document at the url http://cogent-moss/_vti_bin/Webs.asmx was not recognized as a known document type. The error message from each known type may help you fix the problem: - Report from 'http://cogent-moss/_vti_bin/Webs.asmx' is 'The document format is not recognized (the content type is 'text/html; charset=utf-8').'. - Report from 'DISCO Document' is 'Root element is missing.'. - Report from 'WSDL Document' is 'The document format is not recognized (the content type is 'text/html; charset=utf-8').'. - Report from 'XML Schema' is 'The document format is not recognized (the content type is 'text/html; charset=utf-8').'. Metadata contains a reference that cannot be resolved: 'http://cogent-moss/_vti_bin/Webs.asmx'. The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. The remote server returned an error: (401) Unauthorized. If the service is defined in the current solution, try building the solution and adding the service reference again.

I've scoured the web for solutions to this, and most of them are solutions to run-time problems. I merely want to get Visual Studio 2008 to generate the proxy classes for me.

What's strange to me is that if I try the very same thing, except pointed at "http://cogent-moss:8888/_vti_bin/Webs.asmx", it all works fine. Both of these IIS VirtualServers are Sharepoint Site Collections, configured by SharePoint, and both are configured for Windows authentication. What's going on that would make it work when I specify a port number, but not when I go to the default at port 80?

From stackoverflow
  • It sounds like you have an extended web application in SharePoint where port 80 is extended to port 8888 and port 80 allows anonymous but port 8888 requires Windows Authentication. Have you checked in IIS Manager to see if that is the case?

  • If it's not a question of authentication types, as Kirk suggested, are your Alternate Access Mappings set up for both port 8888 and port 80?

  • Hello,

    I had exactly the same problem. I think you should try adding the service reference from VS as "http://cogent-moss:8888/_vti_bin/Webs.asmx?WSDL" instead of "http://cogent-moss:8888/_vti_bin/Webs.asmx".

    Hope that helps,

  • I had the same problem. I went to SharePoint site administration, Application Management. In the Application Security area, select Authentication Providers. On the new page, choose Default, and on the next page select Enable Anonymous. Not sure if this has worked yet, as I am getting other problems, but it does seem to have removed the authentication issue. I'd already set up anonymous access on the IIS virtual directory, but was getting the same error, so I took a look around the SharePoint admin pages.

    Hope this helps Nick

Parsing data from txt file in J2ME

Basically, I'm creating an indoor navigation system in J2ME. I've put the location details in a .txt file, i.e.:

  • Location names and their coordinates.
  • Edges with their respective start node and end node, as well as the weight (length of the edge).

    I put both details in the same file so users don't have to download multiple files to get their map working (that could become time-consuming and seem complex). So what I did is separate the different details by typing out the location names and coordinates first; after that I separated that section from the next section, which is the edges, by drawing a line of multiple underscores.

    Now the problem I'm having is parsing the different details into separate arrays by setting up a check (while manually tokenizing the input stream) for whether the next token is an underscore.

  • If it is (in pseudocode terms), move to the next line in the stream, create a new array and fill it up with the next set of details.

    I found some explanation/code HERE that does something similar but still parses into one array, although it manually tokenizes the input. Any ideas on what to do? Thanks

    Text File Explanation
    The text has the following format...

    <--1stSection-->
     /**
      * Section one has the following format
      * xCoordinate;yCoordinate;LocationName
      */

    12;13;New York City
    40;12;Washington D.C.
    ...e.t.c

    _________________________  <--(underscore divider)

    <--2ndSection-->
     /**
      * Its actually an adjacency list but indirectly provides "edge" details.
      * Its in this form
      * StartNode/MainReferencePoint;Endnode1;distance2endNode1;Endnode2;distance2endNode2;...e.t.c
      */

    philadelphia;Washington D.C.;7;New York City;2
    New York City;Florida;24;Illinois;71
    ...e.t.c

    From stackoverflow
    • package filereader;
      
      import java.io.IOException;
      import java.io.InputStream;
      import java.util.Hashtable;
      import java.util.Vector;
      
      public class FileReader {
          String locationSection;
          String edgeSection;
          Vector locations;
          Vector edges;
      
          public FileReader(String fileName) {
           // read the contents into the string
           InputStream is = getClass().getResourceAsStream(fileName);
           StringBuffer sb = new StringBuffer();
           int ch;
           try {
            while ((ch = is.read()) != -1) {
             sb.append((char) ch);
            }
           } catch (IOException e2) {
            e2.printStackTrace();
           }
           try {
            is.close();
           } catch (IOException e) {
            e.printStackTrace();
           }
           String text = sb.toString();
      
           // separate locations and edges
           String separator = "_________________________";
      
           // read location section, without last end-of-line char
           int endLocationSection = text.indexOf(separator) - 1;
           locationSection = text.substring(0, endLocationSection);
      
           // read edges section, without end-of-line char after separator
           int startEdgeSection = endLocationSection + separator.length() + 3;
           edgeSection = text.substring(startEdgeSection, text.length());
      
           // parse locations and edges
           locations = getLocationsVector(locationSection);
           edges = getEdgesVector(edgeSection);
          }
      
          // parse locations section
          public Vector getLocationsVector(String section) {
           Vector result = new Vector();
           int startLine = 0;
           int endLine = section.indexOf('\n');
           while (endLine != -1) {
            String line = section.substring(startLine, endLine);
            result.addElement(parseLocationsLine(line, ';'));
            startLine = endLine + 1;
            if (endLine == section.length() - 1)
             break;
            endLine = section.indexOf('\n', startLine);
            // if no new line found, read to the end of string
            endLine = (-1 == endLine) ? section.length() - 1 : endLine;
           }
           return result;
          }
      
          // parse edges section
          public Vector getEdgesVector(String section) {
           Vector result = new Vector();
           int startLine = 0;
           int endLine = section.indexOf('\n');
           while (endLine != -1) {
            String line = section.substring(startLine, endLine - 1);
            result.addElement(parseEdgesLine(line, ';'));
            startLine = endLine + 1;
            if (endLine == section.length() + 1)
             break;
            endLine = section.indexOf('\n', startLine);
            // if no new line found, read to the end of string
            endLine = (-1 == endLine) ? section.length() + 1 : endLine;
           }
           return result;
          }
      
          // parse locations line
          public Hashtable parseLocationsLine(String value, char splitBy) {
           Hashtable result = new Hashtable();
           int xCEnd = value.indexOf(splitBy);
           int yCEnd = value.indexOf(splitBy, xCEnd + 1);
           result.put("x", value.substring(0, xCEnd));
           result.put("y", value.substring(xCEnd + 1, yCEnd));
           result.put("location", value.substring(yCEnd + 1, 
            value.length() - 1));
           return result;
          }
      
          // parse edges line
          public Hashtable parseEdgesLine(String value, char splitBy) {
           Hashtable result = new Hashtable();
           int snEnd = value.indexOf(splitBy);
           result.put("startnode", value.substring(0, snEnd));
           int n = 1;
           int start = snEnd + 1;
           int enEnd = value.indexOf(splitBy, snEnd + 1);
           int dstEnd = value.indexOf(splitBy, enEnd + 1);
           while (enEnd != -1 && dstEnd != -1) {
            result.put("endnode" + String.valueOf(n), 
              value.substring(start, enEnd));
            result.put("distance" + String.valueOf(n), value.substring(
              enEnd + 1, dstEnd));
            start = dstEnd + 1;
            enEnd = value.indexOf(splitBy, start);
            if (dstEnd == value.length())
             break;
            dstEnd = value.indexOf(splitBy, enEnd + 1);
            // if last endnode-distance pair, read to the end of line
            dstEnd = (-1 == dstEnd) ? value.length() : dstEnd;
            n++;
           }
           return result;
          }
      
          // getters for locations and edges
          public Vector getLocations() {
           return locations;
          }
      
          public Vector getEdges() {
           return edges;
          }
      
      }
      

      and somewhere in application screen:

      fr = new FileReader("/map.txt");
      Vector vct1 = fr.getLocations();
      for (int i = 0; i < vct1.size(); i++) {
       Hashtable location = (Hashtable) vct1.elementAt(i);
       Enumeration en = location.keys();
       String fv = "";
       while (en.hasMoreElements()) {
        String key = (String) en.nextElement();
        String value = (String)location.get(key);
        fv = fv + value + "-";
       }
       this.add(new LabelField(fv));  
      
      }
      Vector vct2 = fr.getEdges();
      for (int i = 0; i < vct2.size(); i++) {
       Hashtable location = (Hashtable) vct2.elementAt(i);
       Enumeration en = location.keys();
       String fv = "";
       while (en.hasMoreElements()) {
        String key = (String) en.nextElement();
        String value = (String)location.get(key);
        fv = fv + value + "-";
       }
       this.add(new LabelField(fv));  
      
      }
      

      it will be easy to get values from hashtable by keys:
      (String)location.get("x")
      (String)location.get("y")
      (String)location.get("location")
      (String)edge.get("startnode")
      (String)edge.get("endnode1")
      (String)edge.get("distance1")
      (String)edge.get("endnode2")
      (String)edge.get("distance2")
      ...

    ASP.NET-MVC (IIS6) Error on high traffic: Specified cast is not valid

    Hello!

    I just launched my tiny webapp on my humble dedicated server (Win2003)... running ASP.NET MVC, LINQ2SQL, SQL Express 2005, and IIS6 (setup with wildcard mapping)

    The website runs smoothly 90% of the time. However, under relatively high traffic, LINQ2SQL throws the error: Specified cast is not valid

    This error is ONLY thrown at high traffic. I have NO IDEA how or exactly why this happens. Caching did not remove this problem entirely.

    Has anyone seen this problem before? Is there any secret SQL Server tweaking I should have done? Or at least, any ideas on how to diagnose this issue? Because I'm out!

    Naimi

    Stacktrace (from Event Log):

    at System.Data.SqlClient.SqlBuffer.get_SqlGuid()
       at System.Data.SqlClient.SqlDataReader.GetGuid(Int32 i)
       at Read_Friend(ObjectMaterializer`1 )
       at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()
       at Dudlers.Web.Models.DudlersDataContext.GetFriendRequests(Guid userId) in C:\Web\Models\DudlersDataContext.cs:line 562
       at Dudlers.Web.Controllers.BaseController.View(String viewName, String masterName, Object viewData) in C:\Web\Controllers\BaseController.cs:line 39
       at System.Web.Mvc.Controller.View(String viewName)
       at Dudlers.Web.Controllers.CatController.Index() in C:\Web\Controllers\CatController.cs:line 25
       at lambda_method(ExecutionScope , ControllerBase , Object[] )
       at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
       at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(MethodInfo methodInfo, IDictionary`2 parameters)
       at System.Web.Mvc.ControllerActionInvoker.c__DisplayClassb.b__8()
       at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
       at System.Web.Mvc.ControllerActionInvoker.c__DisplayClassb.c__DisplayClassd.b__a()
       at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(MethodInfo methodInfo, IDictionary`2 parameters, IList`1 filters)
       at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
       at System.Web.Mvc.Controller.ExecuteCore()
       at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext)
       at System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext)
       at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
       at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext)
       at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext)
       at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
       at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    
    From stackoverflow
    • Sounds like maybe a race condition, or perhaps a rare bug that is only correlated with high traffic because that's when most of your requests occur.

    • We had a similar problem with LINQ where we would get "Unable to cast object of type 'System.Int32' to type 'System.String'" and "Specified cast is not valid."

      Examples of stack traces:

      System.InvalidCastException: Unable to cast object of type 'System.Int32' to type 'System.String'.
         at System.Data.SqlClient.SqlBuffer.get_String()
         at System.Data.SqlClient.SqlDataReader.GetString(Int32 i)
         at Read_Person(ObjectMaterializer`1 )
         at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()
         at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
         at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
         at RF.Ias.Services.Person.BusinessLogic.PersonTransactionScripts.GetPersons(IEnumerable`1 personIds, Boolean includeAddress, Boolean includeContact)
         at CompositionAopProxy_5b0727341ad64f29b816c1b73d11dd44.GetPersons(IEnumerable`1 personIds, Boolean includeAddress, Boolean includeContact)
         at RF.Ias.Services.Person.ServiceImplementation.PersonService.GetPersons(GetPersonRequest request)
      
      
      System.InvalidCastException: Specified cast is not valid.
         at System.Data.SqlClient.SqlBuffer.get_Int32()
         at System.Data.SqlClient.SqlDataReader.GetInt32(Int32 i)
         at Read_GetRolesForOrganisationResult(ObjectMaterializer`1 )
         at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()
         at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
         at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
         at RF.Ias.Services.Role.DataAccess.RoleDataAccess.GetRolesForOrganisation(GetRolesForOrganisationCriteria criteria, Int32 pageIndex, Int32 pageSize, Int32& recordCount)
         at RF.Ias.Services.Role.BusinessLogic.RoleTransactionScripts.GetRolesForOrganisation(GetRolesForOrganisationCriteria criteria, Int32 pageIndex, Int32 pageSize, Int32& recordCount)
         at CompositionAopProxy_4bd29c6074f54d10a2c09bd4ab27ca66.GetRolesForOrganisation(GetRolesForOrganisationCriteria criteria, Int32 pageIndex, Int32 pageSize, Int32& recordCount)
         at RF.Ias.Services.Role.ServiceImplementation.RoleService.GetRolesForOrganisation(GetRolesForOrganisationRequest request)
      

      We used to get these exceptions if we first got an exception like this "System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first." or " A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)".

      The first exception occurred for a different instance of the DataContext than all of those that followed.

      After some research and asking in this thread, I found that the reason was that I did not dispose of the DataContexts. After I started to do that, it disappeared.
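
      A minimal sketch of what that fix looks like (the entity and property names here are hypothetical; the point is that each unit of work gets its own context, disposed by the using block):

      using System;
      using System.Linq;
      
      public class FriendQueries
      {
          public Friend[] GetFriendRequests(Guid userId)
          {
              // One context per unit of work, disposed as soon as we're done,
              // instead of a single instance shared across concurrent requests.
              using (var db = new DudlersDataContext())
              {
                  return db.GetTable<Friend>()
                           .Where(f => f.UserId == userId)
                           .ToArray();
              }
          }
      }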

      asgerhallas : Are you disposing your datacontext before creating a new instance?
      techphoria414 : Thank you. I had similar issues and this fixed me up.

    Using fseek to backtrack

    Is it reliable to use fseek to backtrack over characters read with fscanf?

    For example, if I have just fscanf-ed 10 characters but would like to backtrack over those 10 chars, can I just fseek(infile, -10, SEEK_CUR)?

    For most situations it works, but I seem to have problems with the character ^M. Apparently fseek registers it as a char but fscanf doesn't register it, thus in my previous example a 10-char block containing a ^M would require fseek(infile, -11, SEEK_CUR) instead. fseek(infile, -10, SEEK_CUR) would leave it short by 1 character.

    Why is this so?

    Edit: I was using fopen in text mode

    From stackoverflow
    • fseek has no understanding of the file's contents and just moves the file pointer 10 bytes back.

      fscanf, depending on the OS, may interpret newlines differently; it may even be that fscanf will insert the ^M if you're on DOS and the ^M does not appear in the file. Check the manual that came with your C compiler.

    • This is because fseek works with bytes, whereas fscanf intelligently handles the fact that the carriage return and line feed are two bytes, and swallows them as one char.

      Yew Long : Yes, I think you're right; This matches observation. I forgot to consider text and binary modes, my fopen defaulted to text mode if I'm not wrong
      R.. : I would question the use of the word "intelligently". How much harder is it to just process both `\r` and `\n` yourself in binary mode? And that way you get uniform behavior across systems (for example if your program is running on unix but someone throws a DOS text file full of `\r`'s at it, it will still work). I always go with "text mode considered harmful".
      justinhj : Sounds like you're saying rather than use the built in functionality of the library you'd duplicate it yourself, because it's not hard. By that logic why use any libraries?
    • Just tried this with VS2008 and found that fscanf and fseek treated the CR and LF characters in the same way (as a single character).

      So with two files:

      0000000: 3132 3334 3554 3738 3930 3132 3334 3536 12345X7890123456

      and

      0000000: 3132 3334 350d 0a37 3839 3031 3233 3435 12345..789012345

      If I read 15 characters I get to the second '5', then seek back 10 characters, my next character read is the 'X' in the first case and the CRLF in the second.

      This seems like a very OS/compiler specific problem.

    • Did you test the return value of fscanf? Post some code.

      Take a look at ungetc. You may have to run a loop over it.

    • You're seeing the difference between a "text" and a "binary" file. When a file is opened in text mode (no 'b' in the fopen second argument), the stdio library may (indeed, must) interpret the contents of the file according to the operating system's conventions for text files. For example, in Windows, a line ends with \r\n, and this gets translated to a single \n by stdio, since that is the C convention. When writing to a text file, a single \n gets output as \r\n.

      This makes it easier to write portable C programs that handle text files. Some details become complicated, however, and fseeking is one of them. Because of this, the C standard only defines fseek in text files in a few cases: to the very beginning, to the very end, to the current position, and to a previous position that has been retrieved with ftell. In other words, you can't compute a location to seek to for text files. Or you can, but you have to take care of the all the platform-specific details yourself.

      Alternatively, you can use binary files and do the line-ending transformations yourself. Again, portability suffers.

      In your case, if you just want to go back to where you last did fscanf, the easiest would be to use ftell just before you fscanf.
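
      For example, a small sketch of that ftell/fseek pattern (error handling omitted):

      #include <stdio.h>
      
      /* Read one token of up to 10 characters, then rewind to where it started.
         Safe in text mode because the position comes from ftell, not from
         counting characters ourselves. */
      void peek_token(FILE *fp)
      {
          char buf[11];
          long pos = ftell(fp);              /* remember the current position */
          if (fscanf(fp, "%10s", buf) == 1)
              fseek(fp, pos, SEEK_SET);      /* go back to the saved position */
      }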

      Yew Long : Thanks, I didn't know about ftell... definitely a better way to implement than to fseek manually
    • Hi all,

      I am very new to C programming.

      I have this code. I am just unable to understand how this code actually works. Can anybody help me with this in detail?

      {
          FILE *fp;
          char *ptr;
          long len = 0;

          if ((fp = fopen(file, "r")) == NULL) {
              printf("ERROR OPENING IMAGE FILE %s....\n", file);
              return 0;
          }
          fseek(fp, 0, 2);
          printf("ftel returns..%d..\n", ftell(fp));
          *p_Data = (str_Msg_t *)malloc((int)ftell(fp) + 5);
          printf("allocated ptr_Data=(%p)\n", *p_Data);
          fclose(fp);
          fp = fopen(file, "r");
          ptr = (char *)((*p_Data)->buf);
          while ((*ptr++ = fgetc(fp)) != EOF)
              len++;
          fclose(fp);
          *--ptr = '\0';
          printf("READ +++++ (%d,%d)++++++++\n", len, strlen((const char *)((*p_Data)->buf)));
          printf("(%10.10s)", ((*p_Data)->buf));
          (*p_Data)->len = len;
          return 1;
      }

      Thanks in advance, haris