C++: Given multiple binary strings, produce a binary string whose nth bit is set if the nth bit is the same in all given strings
On input I am given multiple uint32_t numbers, which are in fact binary strings of length 32. I want to produce a binary string (a.k.a. another uint32_t number) whose nth bit is set to 1 if the nth bit is the same in every given string from the input.
Here is a simple example with 4-bit strings (just a smaller instance of the same problem):
input: 0011, 0101, 0110
output: 1000
because: the first bit is the same in every input string, therefore the first bit of the output is set to 1, and the 2nd, 3rd and 4th bits are set to 0 because they have different values.
What is the best way to produce the output from the given input? I know that I need to use bitwise operators, but I don't know which of them and in which order.
uint32_t getResult( const vector< uint32_t > & data ){
//todo
}
2 answers

You want the bits where all the source bits are 1 and the bits where all the source bits are 0. Just AND the source values and the NOT of the source values, then OR the results.
uint32_t getResult( const vector< uint32_t > & data ){
    uint32_t bitsSet = ~0;
    uint32_t bitsClear = ~0;
    for (uint32_t d : data) {
        bitsSet &= d;
        bitsClear &= ~d;
    }
    return bitsSet | bitsClear;
}

First of all you need to loop over the vector, of course.
Then we can use XOR of the current element and the next element. Save the result.
For the next iteration, do the same: XOR the current element with the next element, but then bitwise OR it with the saved result of the previous iteration. Save this result. Continue until you have iterated over all (minus one) elements.
The saved result is the complement of what you want.
Taking your example numbers (0011, 0101 and 0110), in the first iteration we have 0011 ^ 0101, which results in 0110. In the next iteration we have 0101 ^ 0110, which results in 0011. Bitwise OR with the previous result (0110 | 0011) gives 0111. End of loop, and the bitwise complement gives the result 1000.
See also questions close to this topic

ld: symbol(s) not found with the Eigen library
I am writing a finite element program in C++ using the Eigen library. However, the linker doesn't seem to recognize all my files.
This is the error I get:
Undefined symbols for architecture x86_64:
  "Eigen::Matrix<float, 3, 1, 0, 3, 1> pear::extract<Eigen::Matrix<float, 3, 1, 0, 3, 1>, Eigen::Matrix<float, 1, 1, 0, 1, 1>, Eigen::Block<Eigen::Matrix<int, 1, 1, 0, 1, 1>, 1, 1, false> >(Eigen::Matrix<float, 1, 1, 0, 1, 1> const&, Eigen::Block<Eigen::Matrix<int, 1, 1, 0, 1, 1>, 1, 1, false> const&)", referenced from:
      pear::stiff(Eigen::Matrix<float, 1, 1, 0, 1, 1>&, Eigen::Matrix<float, 1, 1, 0, 1, 1>&, Eigen::Matrix<int, 1, 1, 0, 1, 1>&) in stiff.o
  "Eigen::Matrix<float, 1, 1, 0, 1, 1> pear::load_csv<Eigen::Matrix<float, 1, 1, 0, 1, 1> >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)", referenced from:
      _main in main.o
  "Eigen::Matrix<int, 1, 1, 0, 1, 1> pear::load_csv<Eigen::Matrix<int, 1, 1, 0, 1, 1> >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)", referenced from:
      _main in main.o
ld: symbol(s) not found for architecture x86_64
My main file is the following one:
#include "eigen_ext.hpp"
#include "stiff.hpp"
#include <iostream>
#include <Eigen/Core>

int main(int args, char *argv[]) {
    Eigen::MatrixXf stiff_matrix;
    Eigen::MatrixXi node;
    Eigen::VectorXf xp, yp;

    node = pear::load_csv<Eigen::MatrixXi>("../mesh/node.csv");
    xp = pear::load_csv<Eigen::VectorXf>("../mesh/xp.csv");
    yp = pear::load_csv<Eigen::VectorXf>("../mesh/yp.csv");

    return 0;
}
and the file in question is the following one, eigen_ext.cpp (I found some of the functions on this same forum):
#include "eigen_ext.hpp"

namespace pear {

template <typename T1, typename T2, typename T3>
T1 extract(const T2 &full, const T3 &ind) {
    int num_indices = ind.innerSize();
    T1 target(num_indices);
    for (int i = 0; i < num_indices; i++) {
        target[i] = full[ind[i]];
    }
    return target;
}

template <typename M>
M load_csv(const std::string &path) {
    std::ifstream indata;
    indata.open(path);
    std::string line;
    std::vector<double> values;
    uint64_t rows = 0;
    while (std::getline(indata, line)) {
        std::stringstream lineStream(line);
        std::string cell;
        while (std::getline(lineStream, cell, ',')) {
            values.push_back(std::stod(cell));
        }
        ++rows;
    }
    return Eigen::Map<const Eigen::Matrix<typename M::Scalar, M::RowsAtCompileTime, M::ColsAtCompileTime, Eigen::RowMajor>>(
        values.data(), rows, values.size() / rows);
}

} // namespace pear
Here is an excerpt of the stiff.cpp containing the beginning of the stiff function, where the extract function is called:
MatrixXf stiff(VectorXf &xp, VectorXf &yp, MatrixXi &node) {
    /* xp(i) : x-coordinates of the nodes
       yp(i) : y-coordinates of the nodes
       node(i,j,k) : edges matrix (for each triangle, the indices of the three nodes) */
    /* m : number of triangles of triangulation
       n : number of vertices of triangulation (edges??) */
    int n = node.rows(); // number of vertices
    int m = n - 2;       // each point creates a new triangle except for the first two
                         // (can be seen as a variation of Euler's characteristic)
    // stiff matrix declaration
    MatrixXf S = MatrixXf::Zero(n, n), Dphi(3, 2);
    Vector3f x = Vector3f::Zero(), y = Vector3f::Zero();
    float D = 1;
    int i, j;
All files except the main.cpp have a .hpp header containing reference to the libraries and the definitions of the functions in the .cpp equivalent.
My makefile is the following one:
#MAKEFILE FOR WIT PROJECT
.PHONY: clean all info

#CXX = g++
CXX = gcc-7
#CXX = clang

TARGETS := eigen_ext stiff tests main
SOURCES := $(TARGETS:=.cpp)
OBJS := $(TARGETS:=.o)

OFLAGS := -O2 -O3 -ffast-math
PARALLELFLAGS := -D_GLIBCXX_PARALLEL -fopenmp -pthread -DUSEOMP
CXXFLAGS := -Wall -std=c++14 -v
LDFLAGS :=
LIBS := -lstdc++ -I../Eigen
LIPOPT :=
OPENMPLIB :=
EXAMPLE_DEPS = Makefile

all: main

clean:
	rm -f $(OBJS) $(TARGETS)

info:
	@echo Compiler: CXX = $(CXX)
	@echo Compile command: COMPILE.cc = $(COMPILE.cc)
	@echo Link command: LINK.cc = $(LINK.cc)

eigen_ext.o: eigen_ext.cpp $(EXAMPLE_DEPS)
	@$(CXX) -c $(CXXFLAGS) $(OFLAGS) $(LIBS) -o eigen_ext.o eigen_ext.cpp

stiff.o: stiff.cpp $(EXAMPLE_DEPS)
	@$(CXX) -c $(CXXFLAGS) $(OFLAGS) $(LIBS) -o stiff.o stiff.cpp

main.o: main.cpp $(EXAMPLE_DEPS)
	@$(CXX) -c $(CXXFLAGS) $(OFLAGS) $(LIBS) -o main.o main.cpp

main: main.o stiff.o eigen_ext.o
	@$(CXX) $(LDFLAGS) -o main main.o stiff.o eigen_ext.o $(LIBS)
I am using g++ version 7.2.0 on macOS (installed with Homebrew). All the object files exist after compilation and I have no compilation errors except the undefined symbols.
After reading everywhere about linking and libraries for weeks, I am still stuck on the same problem. I suppose the problem is very simple to solve, though I cannot find it. For this reason, I decided to ask it here. Thank you very much!

Find elements in Mat, Opencv c++
I came across this:
cv::Mat Mat_out;
cv::Mat Mat2(openFingerCentroids.size(), CV_8UC1, cv::Scalar(2));
imshow("Mat2", Mat2);
cv::Mat Mat3(openFingerCentroids.size(), CV_8UC1, cv::Scalar(3));
imshow("Mat3", Mat3);
cv::bitwise_and(Mat2, Mat3, Mat_out);
imshow("Mat_out", Mat_out);
Why does Mat_out contain all 2s? A bitwise operation on a matrix of all 2s and a matrix of all 3s should give me 0, right? Since 2 is not equal to 3?
Anyway, this is the simple thing I tried to implement (like the find function of MATLAB):
Mat_A = {1, 1, 0, 9, 0, 5; 5, 0, 0, 0, 9, 0; 1, 2, 0, 0, 0, 0};
Output expected, if I'm searching for all 5s:
Mat_out = {0, 0, 0, 0, 0, 5; 5, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0};
How can I do this in OpenCV using C++?

Calling sizeof in a constructor initializer list with multiple inheritance
Here we are calling the sizeof operator on the derived class WData1. As I understand it, the first base class constructor (Persistent) will be called. Until then WData1 doesn't exist, because the Persistent constructor is being called and class Data is waiting for its turn.
class WData1 : public Persistent, public Data {
public:
    WData1(float f0 = 0.0, float f1 = 0.0, float f2 = 0.0)
        : Data(f0, f1, f2), Persistent(sizeof(WData1)) {}
};
My question is: how will sizeof behave on a derived class which doesn't exist yet?

Bitwise operations with Floats Python
I have a small problem with bitwise operations in Python. I would like to convert a float into bytes.
But here is my problem: I can only solve it with an int and not with a float. Is it possible to do the same as shown below with a float datatype?
Here is the code, which works if the variable floatvalue is an int. But I would like to do the same with a float:
def float2byte(floatvalue):
    msb = floatvalue >> 8
    lsb = floatvalue & 0xFF
    return msb, lsb
Thank you very much in advance! (:

Add two numbers using bit manipulation
I'm working on the following practice problem from GeeksForGeeks:
Write a function Add() that returns the sum of two integers. The function should not use any of the arithmetic operators (+, ++, --, -, .. etc).
The given solution in C# is:
public static int Add(int x, int y)
{
    // Iterate till there is no carry
    while (y != 0)
    {
        // carry now contains common set bits of x and y
        int carry = x & y;

        // Sum of bits of x and y where at least one of the bits is not set
        x = x ^ y;

        // Carry is shifted by one so that adding it to x gives the required sum
        y = carry << 1;
    }
    return x;
}
Looking at this solution, I understand how it is happening; I can follow along with the debugger and anticipate the value changes before they come. But after walking through it several times, I still don't understand WHY it is happening. If this were to come up in an interview, I would have to rely on memory to solve it, not an actual understanding of how the algorithm works.
Could someone help explain why we use certain operators at certain points and what those totals are supposed to represent? I know there are already comments in the code, but I'm obviously missing something...

How to interpret memory as big endian int16_t in x64 assembly?
I'd like to better understand how an Intel CPU might efficiently pull big-endian int16_t's from memory. My assembly's not the greatest, so I'm looking at the assembly results from clang. One way to accomplish this in C would be:
int16_t n = bytes[i+2] | (int16_t)bytes[i+1] << 8;
Clang (with no flags) converts this line to:
movl   -36(%rbp), %edx
addl   $2, %edx
movslq %edx, %rax
movzbl -32(%rbp,%rax), %edx
movl   -36(%rbp), %esi
addl   $1, %esi
movslq %esi, %rax
movzbl -32(%rbp,%rax), %esi
movw   %si, %di
movswl %di, %esi
shll   $8, %esi
orl    %esi, %edx
movw   %dx, %di
movw   %di, -46(%rbp)
I know there's a lot going on in the C code but this seems kinda crazy since it seems like just a matter of masks and moves.
Furthermore it seems like there must be something that could help out in the 981 x64 instructions (or whatever number you like to use).
Is there a more succinct way to do read memory as a big endian int16_t?

What am I doing wrong? Conversion from HEX to DEC
I'm trying to convert a HEX number to DEC. The HEX is inverted: F6FD should be FDF6.
int a = 0xFD;
int b = 0xF6 << 8;
int res = a | b;
And the output = 10, but I expect 522. And if I do it this way:
unsigned int res2 = (unsigned char) 0xFD | (unsigned char) 0xF6 << 8;
the output is 65014 and not 522. What am I doing wrong?
Searching for a bit pattern in an unsigned int
I'm learning C through Kochan's Programming in C. One of the exercises is the following:
Write a function called bitpat_search() that looks for the occurrence of a specified pattern of bits inside an unsigned int. The function should take three arguments, and should be called as such:
bitpat_search (source, pattern, n)
The function searches the integer source, starting at the leftmost bit, to see if the rightmost n bits of pattern occur in source. If the pattern is found, have the function return the number of the bit at which the pattern begins, where the leftmost bit is number 0. If the pattern is not found, then have the function return -1. So, for example, the call
index = bitpat_search (0xe1f4, 0x5, 3);
causes the bitpat_search() function to search the number 0xe1f4 (= 1110 0001 1111 0100 binary) for the occurrence of the three-bit pattern 0x5 (= 101 binary). The function returns 11 to indicate that the pattern was found in the source beginning with bit number 11.
Make certain that the function makes no assumptions about the size of an int.
This is the way I implemented the function:
#include <stdio.h>

int bitpat_search(unsigned int source, unsigned int pattern, int n);
int int_size(void);

int main(void)
{
    printf("%i\n", bitpat_search(0xe1f4, 0x5, 3));
    return 0;
}

int bitpat_search(unsigned int source, unsigned int pattern, int n)
{
    int size = int_size();
    pattern <<= (size - n);
    unsigned int compare = source;
    int bitnum = 0;
    while (compare) {
        compare >>= (size - n);
        compare <<= (size - n);
        if (compare & pattern) {
            return bitnum;
        } else {
            source <<= 1;
            bitnum++;
            compare = source;
        }
    }
    return -1;
}

// Calculates the size of an integer for a particular computer
int int_size(void)
{
    int count = 0;
    unsigned int x = ~0;
    while (x) {
        ++count;
        x >>= 1;
    }
    printf("%i\n", count);
    return count;
}
First, I calculate the size of an integer (can't use sizeof()). Then, I align the pattern that we are looking for so that it starts from the MSB. I create a temporary variable compare and assign it the value of source, and I also initialize a variable bitnum to 0; it will keep track of the position of the bits we are comparing.
Within the loop I shift compare to the right and left (adding 0's to the right and left of the bits that will be compared to the bit pattern), then I compare the values: if true, the bit number is returned; otherwise, source is shifted once to the left and then assigned to compare (this essentially shifts the position of the bits that we are comparing in compare to the right) and bitnum is incremented. The loop stops executing if pattern wasn't found in source, and -1 is returned, as per the instructions.
However, my program's output turns out to be 14, not 11. I followed the program through with pencil and paper and didn't understand what went wrong... Help?

Using shift operator in swift
I want to use the shift operator to solve my issue below:
1010 -> 1000
1100 -> 1000
10010 -> 10000
11000 -> 10000
Is it possible to use a shift operator?
I cannot seem to get SDL to work, with sublime text
I am making a simple game and wanted to use SDL for the graphics. I run Ubuntu Linux and use the Sublime Text editor, the g++ compiler, and I am coding in C++. I downloaded SDL and followed the steps on http://lazyfoo.net/tutorials/SDL/01_hello_SDL/linux/index.php
After I followed those steps, all of the SDL errors stopped appearing. However, the flag variables aren't working:
#include <SDL2/SDL.h>

Risk() {
    SDL_Init(SDL_INIT_HAPTIC);
    window = SDL_CreateWindow("Board", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              500, 500, SDL_WINDOW_RESIZABLE);
    SDL_GetError();
}
That is the code, I would show you the rest but it really isn't necessary I think. The error appearing in my compiler is:
tom@TBTXPS139360:~/Documents/Subjects/CS/Fun/Risk$ g++ -std=c++14 Game.cpp -W
/tmp/ccLwSxiL.o: In function `Risk::Risk()':
Game.cpp:(.text._ZN4RiskC2Ev[_ZN4RiskC5Ev]+0x1f): undefined reference to `SDL_Init'
Game.cpp:(.text._ZN4RiskC2Ev[_ZN4RiskC5Ev]+0x44): undefined reference to `SDL_CreateWindow'
Game.cpp:(.text._ZN4RiskC2Ev[_ZN4RiskC5Ev]+0x54): undefined reference to `SDL_GetError'
collect2: error: ld returned 1 exit status
Please help and be kind, I am just trying to learn.
I think the error is that the SDL libraries are in the wrong place, or Sublime doesn't know where they are.

Simple explanation of uint32_t meaning
I need to understand the meaning of uint32_t integers and operations between them. What is the actual meaning of 1UL << 24? What do UL, the << symbol, and 24 stand for? I can't find a simple explanation on the net; can I see some examples?

Memcpy uint32_t into char*
I am testing a bit with different formats and stuff like that, and we got a task where we have to put a uint32_t into a char*. This is the code I use:
void appendString(string *s, uint32_t append){
    char data[4];
    memcpy(data, &append, sizeof(append));
    s->append(data);
}

void appendString(string *s, short append){
    char data[2];
    memcpy(data, &append, sizeof(append));
    s->append(data);
}
From string to char* is simple, and we have to add multiple uints into the char*. So now I'm just calling it like:
string s;
appendString(&s, (uint32_t)1152); // this works
appendString(&s, (uint32_t)640);  // this also works
appendString(&s, (uint32_t)512);  // this doesn't work
I absolutely don't understand why the last one isn't working properly. I've tested multiple variations of transforming this. One way always gave me output like (in bits): 00110100 00110101 ... so the first 2 bits are always zero, followed by 11, and then what look to me like random numbers. What am I doing wrong?