
Programming |OT| C is better than C++! No, C++ is better than C

Slo

Member
I imagine you'll be asked about how you would test a new feature, in which case, talk about edge cases. You'll also be expected to know testing terms like smoke testing, instrumentation testing, etc.

Be familiar with the stages of the SDLC (Software Development Life Cycle), and where a QA tester would fit in.

Be able to describe the differences between Unit Testing, Functional Testing, Acceptance Testing, Regression Testing, etc.

Perhaps research the difference between Waterfall and Agile development.

Be prepared for open-ended questions like "what should a well-written test case consist of?"
 
Yes, the purpose of intptr_t is to have an integer type that is guaranteed to be big enough to hold a pointer (pointers are different sizes on different systems). The point is that you can use it to perform pointer arithmetic. This is useful if you're writing your own memory allocator. Other than that, you probably shouldn't do it.

Nothing in particular. You get the address just like usual.

Well, when you're comparing an array, you're not comparing a value with another. You're comparing multiple values. The only way to do that is with a loop*. Or just use strcmp to do it for you.

* If it's just 4 bytes you could technically do some clever casting, but I would advise against it. It would make the code far less maintainable (less readable, and now the array has to be 4 bytes or it won't work).

I assume you meant memcmp. Note this is only safe for array element types that are trivially copyable and have no padding bits (i.e. equal values always have identical byte representations). Otherwise you need to use std::equal(), which you should probably use instead of memcmp anyway.
 

midramble

Pizza, Bourbon, and Thanos
Sooo I'm back here again because I've realized yet again I am behind the curve on modern best practices for a project I'm working on.

Short of it:
Need to possibly rebuild my backend. What's the best way (framework-wise) of implementing a Java backend these days?

Long of it:
I have a webapp I'm building that was originally built using JSP/Tomcat servlets: a raw JS front end with JSP and whatnot that makes calls to Java servlets in the backend.

I recently updated the frontend using react/redux to get all the nice and shiny features that come with that, though now I need to update my backend. I've since learned that pretty much no one uses servlets anymore, and I'm stuck looking for enterprise best practices for a Java backend.

I want to keep the connection as a REST API, though I want to move from standard forms to JSON. The backend doesn't need much rewriting; I just need a more modern framework to serve it. I've seen recurring mentions of Play, Struts, and Spring, but I don't know whether these are on the way out or on the way up. Since I'm currently a one-man team, the less complicated the framework structure the better. Getting comfortable with the entire redux structure was heavy to begin with; relearning a complex backend framework on top of that will probably ruin my diet.

All that being said, am I still on the wrong path? Are JS backends the future and I should rewrite for node.js? Am I talking complete gibberish and only think I know what I'm talking about? Is this a question for the web-programming community instead?
 
Any chance you could convert it to a WebAPI backend in C#? It's not that far off from Java

I think you'd be better off doing it in Spring Boot if you know Java already and have a lot of servlet code that is tested and works, which you can use to port over some logic. The docs and community support are there, and provided you're doing things The Normal Way (if you're just one person, that's probably the case), everything more or less just works.

Shoutout to Dropwizard for REST APIs too, though. It's not so much a framework as a way of putting together the good Java REST backend bits without having to buy whole hog into a framework ecosystem.
 
Just finished an associates in computer programming and I have over a year experience with SQL and various OOP languages... Can't find a job. Everything seems to require so much experience that I don't have or knowledge of so many technologies.

Really discouraged.

I guess the only thing I can do is build a portfolio of various things?

Any wisdom would be greatly appreciated.
 

Makai

Member
Just finished an associates in computer programming and I have over a year experience with SQL and various OOP languages... Can't find a job. Everything seems to require so much experience that I don't have or knowledge of so many technologies.

Really discouraged.

I guess the only thing I can do is build a portfolio of various things?

Any wisdom would be greatly appreciated.
It's a really inefficient system. Just blast your generic resume and cover letter to a bunch of jobs on Indeed. Don't spend more than a minute on any of them. It doesn't even matter what experience or skills it asks for, just send it. Then find companies you actually like and apply directly to their website and hand tailor your cover letter for them.
 
It's a really inefficient system. Just blast your generic resume and cover letter to a bunch of jobs on Indeed. Don't spend more than a minute on any of them. It doesn't even matter what experience or skills it asks for, just send it. Then find companies you actually like and apply directly to their website and hand tailor your cover letter for them.

Okay I'll do that. Thanks for the advice!
 

midramble

Pizza, Bourbon, and Thanos
Any chance you could convert it to a WebAPI backend in C#? It's not that far off from Java

Worried my unfamiliarity with C# syntax would make this an unmitigated disaster.

No, it's a highly employable fad.

The web dev thread is a better fit for this question.

My highly subjective opinion is that Play is good and Spring is a maintenance disaster.

That's kind of what I had figured (about JS backends). It would be easy for the multitude of front-end devs out there to be incorporated into a full-stack team, but at what, it seems to me, would be a big performance loss.

I did more research into Spring and I'm glad you brought this up. I was really leaning in that direction because it seems to have a pretty big sector foothold. That being said, I would hate to start investing time into a framework that ends up being a massive time sink when something else would have been a better fit. The best parallel I can make is my decision to run with react/redux as opposed to Angular. From what I understand, Angular would have been more robust, but react/redux was plenty to support my needs without spending all my time learning the framework. The parallel being: Angular is to Spring what react is to Play? (Probably oversimplifying, or possibly way off altogether.)

That being the case I'm going to dive a bit deeper into Play as this is the third recommendation I've gotten for it.

To clarify my scope, what I'm building is essentially (mechanics wise) a social platform. Users, groups, general objects tied to groups and users, searches, purchases, forum-esque communication, permissions, and whatnot. All backend accessed by java REST API. All frontend react/redux.
 

Kalnos

Banned
That's kind of what I had figured (about JS backends). It would be easy for the multitude of front-end devs out there to be incorporated into a full-stack team, but at what, it seems to me, would be a big performance loss.

Here's the question: how much traffic do you actually get? There are a lot of websites out there built with Node that probably get way more traffic than you do. The JVM may be faster, but if the performance of Node is good enough then it really doesn't matter, and in some cases it could be preferable (particularly if you develop faster in Node).

If performance is your goal you can always look at other options like Golang as well.
 
Here's the question: how much traffic do you actually get? There are a lot of websites out there built with Node that probably get way more traffic than you do. The JVM may be faster, but if the performance of Node is good enough then it really doesn't matter, and in some cases it could be preferable (particularly if you develop faster in Node).

If performance is your goal you can always look at other options like Golang as well.

To be honest, I think the performance gap between Node and the JVM is pretty small. Modern JavaScript interpreters are really fast. Also, with most web applications, the bottleneck is going to be I/O, not memory or even CPU. But if there's an existing application in Java, I'd probably just migrate that application to a new framework like Spring Boot or Dropwizard (both are very good and have nice documentation). I used Play Framework a couple of years ago (from Scala, not Java), and back then I found the documentation really lacking and confusing in spots; I never had that issue with Spring, for instance. It might be different now, and using it was fine for the most part.

For Spring, version 5.0 is coming up in the next couple of months, and it looks like there are going to be some nice additions (an async-first web framework that you can use instead of/alongside Web MVC).
 

Koren

Member
To be honest, I think the performance gap between Node and JVM is pretty small.
I'm quite new to Node.js and quite foreign to Java (I'm not really a fan of either), but aren't those two quite different, Java being far more strict?

I wouldn't have expected the choice between Java and JS to usually be based on performance (and I was under the impression that the difference in performance is indeed quite small).
 
I assume you're exaggerating, but could you elaborate on this?
There was a little exaggeration. It's really popular at the moment because it lets web devs transition into backend without having to learn a new language or libraries (YMMV on that last bit), but Node is no better or worse than other async backend frameworks, and I wouldn't bet on its persistence as a platform.
 

midramble

Pizza, Bourbon, and Thanos
Here's the question: how much traffic do you actually get? There are a lot of websites out there built with Node that probably get way more traffic than you do. The JVM may be faster, but if the performance of Node is good enough then it really doesn't matter, and in some cases it could be preferable (particularly if you develop faster in Node).

If performance is your goal you can always look at other options like Golang as well.

Actual traffic now? Nearly negligible. The point, though, is to prepare for scalability. I mean this not just on the data-performance side but also on the development-performance side; my own fault for vaguely using the term for both.

To be honest, I think the performance gap between Node and the JVM is pretty small. Modern JavaScript interpreters are really fast. Also, with most web applications, the bottleneck is going to be I/O, not memory or even CPU. But if there's an existing application in Java, I'd probably just migrate that application to a new framework like Spring Boot or Dropwizard (both are very good and have nice documentation). I used Play Framework a couple of years ago (from Scala, not Java), and back then I found the documentation really lacking and confusing in spots; I never had that issue with Spring, for instance. It might be different now, and using it was fine for the most part.

For Spring, version 5.0 is coming up in the next couple of months, and it looks like there are going to be some nice additions (an async-first web framework that you can use instead of/alongside Web MVC).

That's a valuable perspective, more specifically the bottleneck being the cost of I/O; though, hopefully, that is covered in the business model if it scales financially as projected.

Also, you now have me back on the fence between Play and Spring hahaha.

My performance issues aren't necessarily just with runtime, but with the development pipeline/workflow/architecture. I prefer classically backend languages because of the easy and inherent tools for abstraction, documentation, and modularity. I may be wrong, but I feel it's much easier to compartmentalize development in a team with Java. This will probably change in the future with the popularity and iteration of JS libraries, though at that point the framework and language may be so different that it feels like a different language in its own right. That and, correct me if I'm wrong, there seem to be pretty different mindsets between front-end and back-end developers, to the point where a back-end dev doing front-end work or vice versa ends with abnormal code structure and friction within the respective teams. (This is mostly second-hand knowledge to me, as I've usually only managed infrastructure/operations for dev teams and have only recently been tasked with overseeing one.)

To clarify, I'm an IT Ops Manager by trade and a hobby programmer at home, trying to get a personal project off the ground while maintaining the day job.
 
I did more research into Spring and I'm glad you brought this up. I was really leaning in that direction because it seems to have a pretty big sector foothold. That being said, I would hate to start investing time into a framework that ends up being a massive time sink when something else would have been a better fit.
You know all those jokes about abstract factory bean proxy factories? Those come from Spring lol.

That being the case I'm going to dive a bit deeper into Play as this is the third recommendation I've gotten for it.

To clarify my scope, what I'm building is essentially (mechanics wise) a social platform. Users, groups, general objects tied to groups and users, searches, purchases, forum-esque communication, permissions, and whatnot. All backend accessed by java REST API. All frontend react/redux.
If I can ask, any reason you really want to use Java? Just familiarity? Or do you have some dependencies on some Java libs?
 
Honestly, for your use case just pick one of the frameworks here and run with it. They'll all more or less work out for you, and you'll be able to move over your existing backend logic with mostly refactoring instead of full rewrites.

You know all those jokes about abstract factory bean proxy factories? Those come from Spring lol.

I think the real originator is all the CORBA wackiness that was out in the open when J2EE first came out. At least we got stuff like FizzBuzz Enterprise Edition out of it. Oddly enough, Spring was a response to J2EE! And yet it still had "great" things like an XML config layer and exposed a "yo dawg, I heard you like design patterns so I put a pattern in your pattern" design philosophy.

Spring Boot more or less cuts out all that baggage and just lets you fling Spring annotations around to do most of the work, while your actual class/method code works with POJOs. There's more work when you have to go off the on-rails paths, but if you're a one-man project you shouldn't need to do that very much, if at all (the project is probably fine using most of Spring Data REST).

Anyway, the earlier poster should just run with something and start coding. I think Boot's awesome for greenfield stuff and doesn't deserve to be lumped in with how Spring was a decade ago, when we had collective trauma from EJBs in heavyweight appserver containers and were desperate for any way out.
 
Honestly, for your use case just pick one of the frameworks here and run with it. They'll all more or less work out for you, and you'll be able to move over your existing backend logic with mostly refactoring instead of full rewrites.



I think the real originator is all the CORBA wackiness that was out in the open when J2EE first came out. At least we got stuff like FizzBuzz Enterprise Edition out of it. Oddly enough, Spring was a response to J2EE! And yet it still had "great" things like an XML config layer and exposed a "yo dawg, I heard you like design patterns so I put a pattern in your pattern" design philosophy.

Spring Boot more or less cuts out all that baggage and just lets you fling Spring annotations around to do most of the work, while your actual class/method code works with POJOs. There's more work when you have to go off the on-rails paths, but if you're a one-man project you shouldn't need to do that very much, if at all (the project is probably fine using most of Spring Data REST).

Anyway, the earlier poster should just run with something and start coding. I think Boot's awesome for greenfield stuff and doesn't deserve to be lumped in with how Spring was a decade ago, when we had collective trauma from EJBs in heavyweight appserver containers and were desperate for any way out.

I agree with this. I've worked on a legacy Spring application at work (written ~10 years ago at the height of the XML craze) and I've written Spring Boot apps myself and it's two different worlds.
 

midramble

Pizza, Bourbon, and Thanos
You know all those jokes about abstract factory bean proxy factories? Those come from Spring lol.

Yeesh. This is the kind of thing I was worried about. I don't want to get lost in the sauce with overly specific thousand-layer subclasses. (I have a bad habit of browsing rare classes for fun and then usually deciding they are unnecessary.)

If I can ask, any reason you really want to use Java? Just familiarity? Or do you have some dependencies on some Java libs?

Mainly familiarity. Java has been my bread and butter for a good long while. Though I also have a few dependencies like jbcrypt for data at rest and whatever I used for SQL queries.

I agree with this. I've worked on a legacy Spring application at work (written ~10 years ago at the height of the XML craze) and I've written Spring Boot apps myself and it's two different worlds.

Honestly, for your use case just pick one of the frameworks here and run with it. They'll all more or less work out for you, and you'll be able to move over your existing backend logic with mostly refactoring instead of full rewrites.



I think the real originator is all the CORBA wackiness that was out in the open when J2EE first came out. At least we got stuff like FizzBuzz Enterprise Edition out of it. Oddly enough, Spring was a response to J2EE! And yet it still had "great" things like an XML config layer and exposed a "yo dawg, I heard you like design patterns so I put a pattern in your pattern" design philosophy.

Spring Boot more or less cuts out all that baggage and just lets you fling Spring annotations around to do most of the work, while your actual class/method code works with POJOs. There's more work when you have to go off the on-rails paths, but if you're a one-man project you shouldn't need to do that very much, if at all (the project is probably fine using most of Spring Data REST).

Anyway, the earlier poster should just run with something and start coding. I think Boot's awesome for greenfield stuff and doesn't deserve to be lumped in with how Spring was a decade ago, when we had collective trauma from EJBs in heavyweight appserver containers and were desperate for any way out.

Thanks guys. I'll take a couple of swings at Play and Spring Boot, get a feel for how quickly I can turn up and work from there. Thanks again.
 

theecakee

Member
I've fiddled a little with Spring, Play and also Jersey.

I found all the Java frameworks for REST apps to be dense and confusing at points. I liked Play the most, though, but I didn't make very large apps with any of them, just tested them out. Personally I like Flask with Python because it's just bare-bones simple and lets me handle setting most of it up. Earlier this week I wrote a REST API with it for a co-worker and me to use.

Switching between languages isn't bad at all.
 
I didn't find a "Programming Help Thread", so I'm not sure whether a more appropriate one exists, but I'm currently working in Mathematica using the Wolfram Language (obviously), and I have a problem I'm hoping somebody could shed some insight on.

I'm not formally trained in Wolfram (I basically spent the last two days going through forty chapters in "An Elementary Introduction to the Wolfram Language") and my previous programming experience consists of C++ and Python but isn't exceptionally extensive and Wolfram is very different, so I'm not too sure how to proceed here.

It's worth noting I have to use Mathematica for this (Python, which I would rather use, is not an option as it has very little support for what the overall question I'm tackling is; this is just a very small segment I've run into difficulty with)

Basically I have:

-A set - Say {a,b,c,d}

-A function similar to - F(a, b): {a} - {b} + {a/b} = 0 where {a} - {b} can only be added if they're the same (a=b). a and b are elements of a finite ring (they're not real or complex numbers)

Basically, I want:

-To be able to test every combination of non equal elements, so:
F(a,b), F(a,c), F(a,d), ..., F(d,b), F(d,c)

-To find the coefficient of {a}, {b}, {c}, {d} in each function and put the coefficient in a vector (v1,v2,v3,v4)

-To put those vectors in a matrix and solve the system.


What I've done:
In Wolfram, I implemented this function as (note: it depends on a third parameter, which denotes the Ring, but it's not relevant to my query):

Code:
F[r_, a_, b_] := f[a] - f[b] + f[a/b] (*I wrapped it in f because I don't want it to add a and b unless they're the same*)

I tested all cases by:

Code:
AllTest[r_] := Table[F[r,i,j], {i,2,r-1}, {j,2,r-1}] 

Test[r_] := DeleteCases[ Flatten[ AllTest[r] ], f[1] ] (*Note that this works because when a=b, F gives f[a] - f[a] + f[1]*)

My question is:

Test[r_] works. It's giving me the correct list of elements. The issue is I don't know how to go from there to turn the table into a set of coefficients corresponding to how many f[a], f[b], f[c], and f[d] are present. Does anybody know how to do this and put it into a vector?

To make that clearer and more illuminating, an example:

A result from Test[r_] might look something like this:

{ f[d], 2*f[b] - f[c], f[b] - f[c] + f[a] }

What I want to convert that to would be:

{ {0,0,0,1}, {0,2,-1,0}, {1,1,-1,0} } (because in the first element of the set there are zero f[a], zero f[b], zero f[c], and one f[d]; in the second element there are zero f[a], two f[b], minus one f[c], and zero f[d]).

Once it's like that I'm sorted, because it's going to be obvious how to put it in a matrix (it basically already is), but I just cannot find a way to do that. I've tried CoefficientArrays and set the variables to f[a], f[b], f[c], f[d], but I just cannot get it working. Does anybody know how to go about doing this?

I'm not sure if I should post here or the Maths Help thread so apologies if it should be in the other.
 

Koren

Member
I'm not sure if I should post here or the Maths Help thread so apologies if it should be in the other.
You can try both threads; I don't think anybody will be annoyed by this, but I'm not sure many people here use Wolfram... (the language, I mean)

I haven't played with this in a LONG time, and I can't test now...

Possible dumb suggestion from someone rusty: can't you define f as the indicator function of a, something like
Code:
f[x_] := If[x == a, 1, 0]
evaluate the list (thus getting the coefficient of f[a] in each position), redefine f as the indicator of b, and so on?


Edit: looking at the CoefficientArrays documentation, I'd say it should work, but f[a] most probably isn't seen as a variable (is your "a" a variable or an immediate value, like an integer?)... A variation could be to convert a, b, c, d into x, y, z, t. But that supposes you have good knowledge of the set, and that its size is somehow limited.
 
You can try both threads; I don't think anybody will be annoyed by this, but I'm not sure many people here use Wolfram... (the language, I mean)

I haven't played with this in a LONG time, and I can't test now...

Possible dumb suggestion from someone rusty: can't you define f as the indicator function of a, something like
Code:
f[x_] := If[x == a, 1, 0]
evaluate the list (thus getting the coefficient of f[a] in each position), redefine f as the indicator of b, and so on?


Edit: looking at the CoefficientArrays documentation, I'd say it should work, but f[a] most probably isn't seen as a variable (is your "a" a variable or an immediate value, like an integer?)... A variation could be to convert a, b, c, d into x, y, z, t. But that supposes you have good knowledge of the set, and that its size is somehow limited.

This definitely seems like a good idea, so I'm going to try to approach it from this angle and see if it works.

To talk about the edit, it's... complicated and quite long. My explanation above simplified what is actually happening, which is what makes this a bit trickier.

Basically, I'm working over Finite Fields, so all of the elements in the set are the elements in the Finite Field which aren't equal to 1 or 0, and the function F(a,b) is actually considerably longer.

When the number of elements in the field is prime, we have an easy description of all the elements (it's just {0,1,2,3,...,p-1} where p is the size of the field, since Fp = Z/pZ). We can write these as {a,0} (i.e. a + px = a + 0x, since p = 0 in Fp).

When the number of elements in the field is a power of a prime, though, say p^n, it gets more complicated, because in this case F_{p^n} is isomorphic to Fp[x] quotiented by an irreducible polynomial of degree n. Since it's quotiented by some polynomial, we get something like a + bx + cx^2 + ... + zx^(n-1), and we can write it as {a,b,c,d,...,z}. It is not trivial to determine certain details in this situation. A simple example would be that in F9 = F3[x]/<x^2+1> = {0,1,2,x,2x,1+x,1+2x,2+x,2+2x}, it's not easy to say what {1,1}^(-1) = 1/(1+x) is (some element of the field, but we want to know which one).

[To give an example, in F9 we have {0,1,2,x,2x,1+x,1+2x,2+x,2+2x}. The inverse of (1+x), i.e. 1/(1+x), is then (2+x), since (1+x)*(2+x) = 2 + 3x + x^2 = 2 + 0 + (-1) = 2 + 2 = 1 (using x^2 = -1), but it's not 'obvious' that this is the case. In addition, we can represent everything in the form a + bx where a and b are mod 3, so everything can be represented as {a,b}.]

For the moment, my code is only looking at the case where p is prime, so in truth I could work with integers mod p, and I think it shouldn't necessarily be too tricky to implement.

However, after I get it working for primes, I'm going to need to get it working for powers of primes too. Because of that, I'm trying to write the code as generally as I can (using the finite field package) so it will end up 'easier' to apply to fields of arbitrary size. At the moment, every vector is simply {a,0}. Once I have this case working, though, I'll need to make the vectors sufficiently long, create a function that enumerates all possible coefficient combinations so that I have a description of every element of the field, and then go through all the combinations of elements in the field.

For this reason, I want to keep things as general as possible and avoid using specific values in any of the formulas, or any logical checks that depend on an actual value, since the value being checked would also need to be written in terms of the finite field unless it's specifically defined to be a real number.

I hope that slightly clarifies the actual context of the questions and makes it a little clearer what some of the challenges are in constructing any logical checks here.

EDIT: I should also clarify another approach that's possible when dealing with finite fields: because every finite field has at least one generator (every element of Fq* can be written as a^b for some natural number b), we could, in theory, represent every element as a power of a generator. The issue is that finding a generator is a tough problem (very much so: https://arxiv.org/pdf/1304.1206v4.pdf), and there is more than one, so we couldn't necessarily specify what the generator would be in any logical checks, and it would have to be kept very general.

EDIT: Cpp, will do! I was hoping somebody here would be really familiar with Mathematica, or that there'd be a really quick solution (such as if Koren's suggestion works). If that suggestion doesn't work, I'll go to Stack Exchange. Thanks!
 

Koren

Member
I feel bad for making you type all this... It's really interesting to read (I also deal with finite fields from time to time; I recently reimplemented them in Python for a QR code decoder), but I wish I could bring you more help...

Unfortunately, I'm really rusty in Mathematica, as I said. I would at least need to do some tests, but I lack a license here (maybe at work, not sure... I've definitely seen the software in the cellar, but it may be an old floppy version installed nowhere ^_^ In fact, it probably is).

Yes, Stack Exchange seems the way to go. They're really helpful (the only problem I have with SE is that it's really tricky to help others there: unless it's a REALLY tricky question on an obscure module, people reply faster than you can type ;) )
 

Ledbetter

Member
Finally, I have one semester left of CS. So naturally, I start having fears about the difficulty of getting a job when I get out, and of course doubts about myself (that impostor syndrome, of course).

I've got the CLRS Algorithms book to study this summer, as I've got nothing to do in the mornings except work out. I also work four hours daily in the IT department of my university on a Java-based administrative website they're making for students and teachers (it's mandatory for students of almost all public universities here in Mexico to do 480 hours of unpaid work), so I could probably put that on my resume.

I'm working on a project that my university requires me to do in order to graduate as well, which is a Django website that classifies the neighborhoods of a city by security level and crime type using AI, based on the crimes reported by citizens (it is kinda simple, actually). I chose it because I'm actually interested in machine learning and I'd love to work on something related to that, but I understand it might be hard for a new grad to immediately start working in a field like that.

I'd like to work in the US (even though Trump doesn't want me there), but I feel like the chances of that happening are pretty narrow. Anyway, I'm still preparing myself, and if anyone has any tips I'd be glad to read them.

I just needed to vent out my worries, I guess.
 

Kalnos

Banned
I'm working on a project that my university requires me to do in order to graduate as well, which is a Django website that classifies the neighborhoods of a city by security level and crime type using AI, based on the crimes reported by citizens (it is kinda simple, actually). I chose it because I'm actually interested in machine learning and I'd love to work on something related to that, but I understand it might be hard for a new grad to immediately start working in a field like that.

I'd like to work in the US (even though Trump doesn't want me there), but I feel like the chances of that happening are pretty narrow. Anyway, I'm still preparing myself, and if anyone has any tips I'd be glad to read them.

I just needed to vent out my worries, I guess.

Definitely list that Django project on your resume. You may feel that it's 'kinda simple', but I felt that way about all the projects I did, and people love to see them. Throw it on your GitHub for sure.

Check out Cracking The Coding Interview. You may or may not have a tough technical interview but it will prepare you either way. If you actually attempt to get a job in the Bay Area then reading CTCI and other books like it become much more important.
 

upandaway

Member
Finally, I have one semester of CS left. So naturally, I'm starting to have fears about how hard it will be to get a job when I get out, and of course doubts about myself (that impostor syndrome, of course).

I've got the CLRS algorithms book to study this summer, as I've got nothing to do in the mornings except working out. I also work in the IT department of my university 4 hours daily on a Java-based administrative website they're building for students and teachers (it's mandatory for students of almost all public universities here in Mexico to do 480 hours of unpaid work), so I could probably put that on my resume.

I'm working on a project that my university requires me to do in order to graduate as well: a Django website that classifies the neighborhoods of a city by security level and crime type using AI, based on crimes reported by citizens (it's kinda simple, actually). I chose it because I'm actually interested in machine learning and I'd love to work on something related to that, but I understand it might be hard for a new grad to immediately start working in a field like that.

I'd like to work in the US (even though Trump doesn't want me there), but I feel like the chances of that happening are pretty slim. Anyway, I'm still preparing myself, and if anyone has any tips I'd be glad to read them.

I just needed to vent out my worries, I guess.
Deep learning is definitely moving in the direction of hiring less and less experienced people as the tools become easier to use and there are more internet tutorials (I was hired for NLP deep-learning research before even passing an ML class). Even if your area doesn't have openings right now, that could easily change in a year or two. You might need to teach yourself some things, but it's never been easier.
 

Koren

Member
Deep learning is definitely moving in the direction of hiring less and less experienced people as the tools become easier to use and there are more internet tutorials
Has it ever been really that hard? One of my first coding experiences in research was deep learning (or what would now be called deep learning) applied to particle physics, and honestly, I had trouble with processing power, and a bit with the language (a lot of people at CERN use an interactive variant of C++), but not with the NNs...

Granted, the choice of NN shape and parameters was a lot of trial and error, but has that really changed?
 

upandaway

Member
Has it ever been really that hard? One of my first coding experiences in research was deep learning (or what would now be called deep learning) applied to particle physics, and honestly, I had trouble with processing power, and a bit with the language (a lot of people at CERN use an interactive variant of C++), but not with the NNs...

Granted, the choice of NN shape and parameters was a lot of trial and error, but has that really changed?
I can only speak for my country, but until recently almost all job openings required a master's/PhD; lately I've seen the requirements drop a little. I don't know how it was before DL blew up though, haha, I wasn't around! The way I understand it, because ML wasn't relevant when most of the current seniors were in school, they tried to make up for it with higher academic requirements. I could be wrong though.

Most of the work for my team right now is staying on top of recent advancements, given how fast the whole field changes every year. You can't really design LSTM variants for your problem with trial and error alone, but it's definitely part of the package (though there has been neat progress lately in understanding NNs, CNNs especially).
 

danthefan

Member
Sorry, not strictly a programming question, but I thought you guys might be able to help.

I'm probably asking for something that's too good to be true, but are there any free cloud-based databases out there? I have a Python script that scrapes some data from the web into a MySQL database on my desktop, but I'd like to be able to access the data at any time from my laptop, for example, without having to run the code again.

I'm also thinking of putting the data in a basic website just to help me learn PHP a bit.

It's not sensitive at all, so security isn't a massive concern; the data is all freely available anyway.

If not free, then very cheap? Any other possible solutions?
 

Somnid

Member
Has it ever been really that hard? One of my first coding experiences in research was deep learning (or what would now be called deep learning) applied to particle physics, and honestly, I had trouble with processing power, and a bit with the language (a lot of people at CERN use an interactive variant of C++), but not with the NNs...

Granted, the choice of NN shape and parameters was a lot of trial and error, but has that really changed?

I took some ML in college and forgot most of it (badly taught course), and have just recently started going hard at some of the more modern stuff with a mix of online resources. My impression is that the basics of ML are the same as they ever were; that hasn't changed in 30+ years. What did change is the availability of the cloud, which gives anyone access to supercomputer resources, and just the fact that there's more information than ever floating around. That, and even phones now have GPGPU capabilities to make use of. So there's just a lot more that's viable. And a couple of novel new techniques like generative adversarial networks.
 
So just in case anybody here ever comes across something similar and is wondering, I got it working. The solution was:

g[x] := f[FieldInd[x]]

From there, we do:

D[{g[a] + g[b], ..., -g[c]}, {g[a], g[b], g[c]}]

This associates every element in the finite field (a, b, c) to f[integer] by using the fact that the multiplicative group of the field is cyclic. f stops the integers from acting on one another as integers. The function D then takes the partial derivative of each expression with respect to g[a], g[b], g[c] (now f[FieldInd[a]], f[FieldInd[b]], f[FieldInd[c]]), which returns a vector giving the coefficient of every element in the set with g applied to it.

From there you can put it in a matrix without issue.
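If it helps anyone map the trick to another language, here's a rough plain-Python sketch of the same idea (all names here are mine, not Wolfram's): field elements become opaque symbols, linear expressions over them are stored as coefficient dicts, and "differentiating" with respect to a symbol is just reading off its coefficient, which is what fills in the matrix.

```python
# Sketch of the coefficient-extraction trick in plain Python.
# Field elements map to opaque symbols (like g[x] := f[FieldInd[x]]),
# linear expressions over them are stored as {symbol: coefficient} dicts,
# and "partial differentiation" w.r.t. a symbol is a coefficient lookup.

symbols = ["g_a", "g_b", "g_c"]  # stand-ins for g[a], g[b], g[c]

def coeff(expr, sym):
    """d(expr)/d(sym) for a linear expression stored as a dict."""
    return expr.get(sym, 0)

def jacobian(exprs, syms):
    """Matrix whose rows are the coefficients of each expression."""
    return [[coeff(e, s) for s in syms] for e in exprs]

# Example: the expressions g_a + g_b and -g_c
exprs = [{"g_a": 1, "g_b": 1}, {"g_c": -1}]
matrix = jacobian(exprs, symbols)
# matrix == [[1, 1, 0], [0, 0, -1]]
```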
 

Koren

Member
What did change is the availability of the cloud, which gives anyone access to supercomputer resources
Well, at that time I was playing with some CERN clusters, so it's true that I had some supercomputer resources ;)

So there's just a lot more that's viable.
No argument here... I would expect GPU computation helped too (I wanted to try, but never found the time).

And a couple of novel new techniques like generative adversarial networks.
I really should dive back into it...


So just in case anybody here ever comes across something similar and is wondering, I got it working. The solution was:

g[x] := f[FieldInd[x]]

From there, we do:

D[{g[a] + g[b], ..., -g[c]}, {g[a], g[b], g[c]}]

This associates every element in the finite field (a, b, c) to f[integer] by using the fact that the multiplicative group of the field is cyclic. f stops the integers from acting on one another as integers. The function D then takes the partial derivative of each expression with respect to g[a], g[b], g[c] (now f[FieldInd[a]], f[FieldInd[b]], f[FieldInd[c]]), which returns a vector giving the coefficient of every element in the set with g applied to it.

From there you can put it in a matrix without issue.

Interesting, thanks... Pasted into my notebook for possible future reference ^_^

(though I admit, the Mathematica / Wolfram part is absolutely tiny)
 

upandaway

Member
I think it's safe to say that like 90% of the improvements in ML this decade are pure GPU advancement; only a small part is actual algorithms and novel solutions (GANs are definitely popular right now, though)
 

Somnid

Member
It would be fun to pivot into ML, but it feels like entry level doesn't exist, especially with the disconnect of being considered senior in other fields. Seems like I'd just have to get lucky with a job that has some overlap so I could build up experience. Although it seems like the days of it being PhD-only are fading as it becomes just a regular part of modern systems, which makes me happy. I'm not paying $40K to learn this stuff when there's some great $10 content on Udemy.
 

Eridani

Member
I think it's safe to say that like 90% of the improvements in ML this decade are pure GPU advancement; only a small part is actual algorithms and novel solutions (GANs are definitely popular right now, though)

I don't know if I'd agree with this. There is a ton of research going into ML right now, and most of it is most definitely not just throwing things at more and more powerful GPUs. That's mostly because ML is incredibly broad: even though neural networks are all the rage these days, they only cover a small fraction of the things people are trying to do with machine learning. Traditional classification and clustering methods are still being worked on; graph learning is pretty huge right now, as are online learning, data fusion, natural language processing, Monte Carlo methods and a million other things. A lot of these cannot simply be solved by applying more GPU power.

And there have been some pretty huge algorithmic breakthroughs in recent years. Something as simple as dropout, for example, managed to bring huge improvements; it's not super new, but I'm pretty sure it's from within this decade. Then there's AlphaGo, which blew everyone's mind and was only possible due to clever use of CNNs in combination with Monte Carlo methods. Word2Vec is a similar example of a clever neural network application. CNNs are especially interesting in this respect, since they can be combined with so many different things that the possibilities for clever algorithms are essentially endless.

Even ignoring the big things, I've frequently come across ML papers with some really clever solutions for very specialized problems. And there's a ton of papers like this - take a common method, apply some neat twist to it and get a really interesting method that's very useful in specific circumstances.
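As an aside, for anyone curious how small the dropout idea actually is relative to its impact, here's a minimal NumPy sketch of inverted dropout. This is my own illustration, not any particular framework's implementation; the rate and shapes are arbitrary.

```python
import numpy as np

def dropout(activations, rate, training=True, rng=None):
    """Inverted dropout: zero each unit with probability `rate` during
    training and rescale survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    rng = rng if rng is not None else np.random.default_rng()
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

x = np.ones((4, 8))
y = dropout(x, rate=0.5, rng=np.random.default_rng(0))  # roughly half zeros, rest 2.0
assert np.array_equal(dropout(x, rate=0.5, training=False), x)  # identity at test time
```

At test time the layer does nothing, which is what makes the rescaling during training necessary.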
 

Koren

Member
Do you have a "state of the art" reference that deals with this topic? I'd really like to get up to date on it...

I keep using graph cuts, KNN, basic NNs, etc., but I feel like I'm missing something.

I stopped doing research some time ago, but I was toying with the idea of using NNs to replace graph cuts in a classification project, and I never tried. I'd love to correct that...
 

Eridani

Member
Do you have a "state of the art" reference that deals with this topic? I'd really like to get up to date on it...

I keep using graph cuts, KNN, basic NNs, etc., but I feel like I'm missing something.

I stopped doing research some time ago, but I was toying with the idea of using NNs to replace graph cuts in a classification project, and I never tried. I'd love to correct that...

Sadly, no. I'm mostly speaking about the things I've seen in papers relating to topics I've worked on (so mostly natural language processing and some basic computer vision, and even then I rarely used neural networks). As I've said, the whole ML landscape is just so incredibly broad that I really doubt there's a single reference that deals with all of it. If you want state of the art, looking up papers on Google Scholar that relate to your specific problem will probably give you some useful results.

If you want to experiment with neural networks, though, that's really easy to do now. TensorFlow (and other frameworks) make it pretty easy to screw around with NNs for a bit, since you just write some simple Python code and get a highly optimized program.
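To give a feel for what those frameworks automate, here's a tiny hand-rolled network in plain NumPy (one hidden layer, manual backprop, XOR as the toy problem); the hand-derived gradient block is exactly the part TensorFlow's autodiff writes for you. The sizes and learning rate are arbitrary choices of mine:

```python
import numpy as np

# Toy problem: XOR, the classic "needs a hidden layer" dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return h, sigmoid(h @ W2 + b2)    # output probability

for _ in range(3000):
    h, p = forward(X)
    # Hand-derived gradients of binary cross-entropy: the part a
    # framework's automatic differentiation does for you.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= dW1; b1 -= db1
    W2 -= dW2; b2 -= db2

preds = (forward(X)[1] > 0.5).astype(int)  # typically recovers XOR
```

In a framework you'd declare the layers and the loss, and the whole gradient section disappears.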
 

Koren

Member
If you want to experiment with neural networks, though, that's really easy to do now. TensorFlow (and other frameworks) make it pretty easy to screw around with NNs for a bit, since you just write some simple Python code and get a highly optimized program.
Mmm... Could (also) be a really useful resource for my students who want to work with this kind of thing.

Many thanks (I'd heard about some of this, but I didn't know such a thing was available)
 

upandaway

Member
I don't know if I'd agree with this. There is a ton of research going into ML right now, and most of it is most definitely not just throwing things at more and more powerful GPUs. That's mostly because ML is incredibly broad: even though neural networks are all the rage these days, they only cover a small fraction of the things people are trying to do with machine learning. Traditional classification and clustering methods are still being worked on; graph learning is pretty huge right now, as are online learning, data fusion, natural language processing, Monte Carlo methods and a million other things. A lot of these cannot simply be solved by applying more GPU power.

And there have been some pretty huge algorithmic breakthroughs in recent years. Something as simple as dropout, for example, managed to bring huge improvements; it's not super new, but I'm pretty sure it's from within this decade. Then there's AlphaGo, which blew everyone's mind and was only possible due to clever use of CNNs in combination with Monte Carlo methods. Word2Vec is a similar example of a clever neural network application. CNNs are especially interesting in this respect, since they can be combined with so many different things that the possibilities for clever algorithms are essentially endless.

Even ignoring the big things, I've frequently come across ML papers with some really clever solutions for very specialized problems. And there's a ton of papers like this - take a common method, apply some neat twist to it and get a really interesting method that's very useful in specific circumstances.
I agree with all of that; my angle was that in practical terms, things that were known before the year 2000 achieve pretty amazing results, frighteningly close to state of the art considering how much progress has happened since then. In terms of pure accuracy gains, I think a majority comes from computational power. From what I've been exposed to, the research goes into how you apply it to different problems, or how you push it to a (slightly better) new state of the art. Like, LSTM, from 1997, is still the best we can do for language modeling (slightly better with variants), and it says something that you have to build something as out-there and complicated as an HM-LSTM to improve on it by a tiny bit more. AlphaGo is super impressive, but to me the genius was how they were able to take such a complex variety of different algorithms and make them work together. I'll give you dropout; regularization solutions in general have done a lot recently (we can even count things like Nvidia's float16 craziness). And reinforcement learning is one area where no amount of computational power would have helped without DeepMind's recent progress, definitely.
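For anyone who hasn't looked at it, the 1997 LSTM cell itself is only a few lines; here's a single-step NumPy sketch of my own (standard gates, no peepholes; shapes and initialization are arbitrary):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step; W maps the concatenated [x, h_prev] to the
    four stacked gates (input, forget, output, candidate)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = np.concatenate([x, h_prev]) @ W + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g   # cell state: gated, additive memory
    h = o * np.tanh(c)       # hidden state: gated read-out
    return h, c

n_in, n_hid = 3, 5
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(n_in + n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):  # run a short random sequence
    h, c = lstm_step(x, h, c, W, b)
```

The additive `c = f * c_prev + i * g` update is the whole trick: gradients flow through the cell state without vanishing the way they do in a plain RNN.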

I have a professor at my uni who takes it to the extreme and says every single novel idea will eventually be made obsolete once we have enough computational power, haha (but he doesn't care about RL). Dunno if I can buy that yet. His source of confidence is that a couple of years ago his startup made a ton of money with a file classifier (malware/not malware) built with no cyber-security knowledge, which performed better than the handcrafted cyber solutions (and I think it even caught the NSA zero-day exploits without being trained on them).
 